opensource.com


Learn Expect by writing and automating a simple game

Mon, 02/13/2023 - 16:00
By JamesF

While trying to automate my workflow, I hit upon a configuration utility that defied meaningful automation. It was a Java process that didn't support a silent installer or reading from stdin, and it had an inconsistent set of prompts. Ansible's expect module was inadequate for this task, but I found that the expect command was just the tool for the job.

My journey to learn Expect meant learning a bit of Tcl. Now that I have the background to create simple programs, I can better learn to program in Expect. I thought it would be fun to write an article that demonstrates the cool functionality of this venerable utility.

This article goes beyond the typical simple game format. I plan to use parts of Expect to create the game itself. Then I demonstrate the real power of Expect with a separate script to automate playing the game.

This programming exercise shows several classic programming examples of variables, input, output, conditional evaluation, and loops.

Install Expect

On Linux systems that use the dnf package manager (such as Fedora), install Expect with:

$ sudo dnf install expect
$ which expect
/bin/expect

I found that my version of Expect was included in the base operating system of macOS:

$ which expect
/usr/bin/expect

On macOS, you can also load a slightly newer version using brew:

$ brew install expect
$ which expect
/usr/local/bin/expect

Guess the number in Expect

The number-guessing game using Expect is not that different from the base Tcl version I used in my previous article.

All things in Tcl are strings, including variable values. Code lines are best contained by curly braces (instead of trying to use line continuation). Square brackets are used for command substitution, which is useful for deriving values from other functions and can be used directly as input where needed. You can see all of this in the following script.

Create a new game file numgame.exp, set it to be executable, and then enter the script below:

#!/usr/bin/expect

proc used_time {start} {
    return [expr [clock seconds] - $start]
}

set num [expr round(rand()*100)]
set starttime [clock seconds]
set guess -1
set count 0

send "Guess a number between 1 and 100\n"

while { $guess != $num } {
    incr count
    send "==> "
    expect {
        -re "^(\[0-9]+)\n" {
            send "Read in: $expect_out(1,string)\n"
            set guess $expect_out(1,string)
        }
        -re "^(.*)\n" {
            send "Invalid entry: $expect_out(1,string) "
        }
    }
    if { $guess < $num } {
        send "Too small, try again\n"
    } elseif { $guess > $num } {
        send "Too large, try again\n"
    } else {
        send "That's right!\n"
    }
}

set used [used_time $starttime]
send "You guessed value $num after $count tries and $used elapsed seconds\n"

Using proc sets up a function (or procedure) definition. It consists of the name of the function, followed by a list of parameters (here, the single parameter {start}), followed by the function body. The return statement shows a good example of nested Tcl command substitution. The set statements define variables. The first two use command substitution to store a random number and the current system time in seconds.

The while loop and if-elseif-else logic should be familiar. Note again the particular placement of the curly braces to help group multiple command strings together without needing line continuation.

The big difference you see here (from the previous Tcl program) is the use of the functions expect and send rather than puts and gets. The expect and send commands form the core of Expect program automation. In this case, you use these functions to automate a human at a terminal. Later you can automate a real program. Using the send command in this context isn't much more than printing information to the screen. The expect command is a bit more complex.

The expect command can take a few different forms depending on the complexity of your processing needs. The typical use consists of one or more pattern-action pairs such as:

expect "pattern1" {action1} "pattern2" {action2}

More complex needs can place multiple pattern-action pairs within curly braces, optionally prefixed with options that alter the processing logic. The form I used above encapsulates multiple pattern-action pairs. It uses the option -re to apply regex processing (instead of glob processing) to the pattern. It follows this with curly braces encapsulating one or more statements to execute. I've defined two patterns above. The first is intended to match a string of one or more numbers:

"^(\[0-9]+)\n"

The second pattern is designed to match anything else that is not a string of numbers:

"^(.*)\n"

Take note that this use of expect is executed repeatedly from within a while statement. This is a perfectly valid approach to reading multiple entries. In the automation, I show a slight variation of Expect that does the iteration for you.

Finally, the $expect_out variable is an array used by expect to hold the results of its processing. In this case, the variable $expect_out(1,string) holds the first captured pattern of the regex.
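For instance, given the first pattern above, here is a minimal sketch of the difference between the full match and the capture group (the send lines are illustrative and not part of the game):

expect -re "^(\[0-9]+)\n"
send "Full match: $expect_out(0,string)"
send "Capture group: $expect_out(1,string)\n"

Here $expect_out(0,string) holds the entire matched text, including the trailing newline, while $expect_out(1,string) holds only the digits captured by the parentheses.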

Run the game

There should be no surprises here:

$ ./numgame.exp
Guess a number between 1 and 100
==> Too small, try again
==> 100
Read in: 100
Too large, try again
==> 50
Read in: 50
Too small, try again
==> 75
Read in: 75
Too small, try again
==> 85
Read in: 85
Too large, try again
==> 80
Read in: 80
Too small, try again
==> 82
Read in: 82
That's right!
You guessed value 82 after 8 tries and 43 elapsed seconds

One difference you may notice is the impatience this version exhibits. If you hesitate long enough, expect times out and reports an invalid entry, then prompts you again. This is different from gets, which waits indefinitely. The expect timeout is a configurable feature that helps deal with hung programs or unexpected output.
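For example, here is a minimal sketch of raising the timeout and handling its expiry explicitly (the 30-second value and the timeout action are illustrative, not part of the game above):

set timeout 30
expect {
    -re "^(\[0-9]+)\n" { set guess $expect_out(1,string) }
    timeout { send "No input received, prompting again\n" }
}

The default timeout is 10 seconds, and setting it to -1 makes expect wait indefinitely, matching the behavior of gets.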

Automate the game in Expect

For this example, the Expect automation script needs to be in the same folder as your numgame.exp script. Create the automate.exp file, make it executable, open your editor, and enter the following:

#!/usr/bin/expect

spawn ./numgame.exp

set guess [expr round(rand()*100)]
set min 0
set max 100

puts "I'm starting to guess using the number $guess"

expect {
    -re "==> " {
        send "$guess\n"
        expect {
            "Too small" {
                set min $guess
                set guess [expr ($max+$min)/2]
            }
            "Too large" {
                set max $guess
                set guess [expr ($max+$min)/2]
            }
            -re "value (\[0-9]+) after (\[0-9]+) tries and (\[0-9]+)" {
                set tries $expect_out(2,string)
                set secs $expect_out(3,string)
            }
        }
        exp_continue
    }
    "elapsed seconds"
}

puts "I finished your game in about $secs seconds using $tries tries"

The spawn function executes the program you want to automate. It takes the command to run, followed by any arguments to pass to it. I set the initial number to guess, and the real fun begins. The expect statement is considerably more complicated and illustrates the power of this utility. Note that there is no looping statement here to iterate over the prompts. Because my game has predictable prompts, I can ask expect to do a little more processing for me. The outer expect attempts to match the game's input prompt of ==>. Seeing that, it uses send to guess and then uses an additional expect to figure out the results of the guess. Depending on the output, variables are adjusted and calculated to set up the next guess. When the prompt ==> is matched, the exp_continue statement is invoked. That causes the outer expect to be re-evaluated, so a loop is no longer needed.
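Stripped to a skeleton, the control flow looks like this (a sketch only, with the guessing logic elided):

expect {
    -re "==> " {
        # send the guess, then adjust $guess based on the game's reply
        send "$guess\n"
        exp_continue
    }
    "elapsed seconds"
}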

This input processing relies on another behavior of Expect: it buffers the terminal output until it matches a pattern. This buffering includes any embedded end-of-line and other unprintable characters, which is different from the typical line-by-line regex matching you may be used to with Awk and Perl. When a pattern is matched, anything coming after the match remains in the buffer and is made available for the next match attempt. I've exploited this to cleanly end the outer expect statement:

-re "value (\[0-9]+) after (\[0-9]+) tries and (\[0-9]+)"

You can see that the inner pattern matches the correct guess and does not consume all of the characters printed by the game. The very last part of the string (elapsed seconds) is still buffered after the successful guess. On the next evaluation of the outer expect, this string is matched from the buffer to cleanly end the statement (no action is supplied). Now for the fun part, let's run the full automation:

$ ./automate.exp
spawn ./numgame.exp
I'm starting to guess with the number 99
Guess a number between 1 and 100
==> 99
Read in: 99
Too large, try again
==> 49
Read in: 49
Too small, try again
==> 74
Read in: 74
Too large, try again
==> 61
Read in: 61
Too small, try again
==> 67
Read in: 67
That's right!
You guessed value 67 after 5 tries and 0 elapsed seconds
I finished your game in about 0 seconds using 5 tries


Wow! My number guessing efficiency dramatically increased thanks to automation! A few trial runs resulted in anywhere from 5-8 guesses on average. It also always completed in under 1 second. Now that this pesky, time-consuming fun can be dispatched so quickly, I have no excuse to delay other more important tasks like working on my home-improvement projects :P

Never stop learning

This article was a bit lengthy but well worth the effort. The number guessing game offered a good base for demonstrating a more interesting example of Expect processing. I learned quite a bit from the exercise and was able to complete my work automation successfully. I hope you found this programming example interesting and that it helps you to further your automation goals.

Code a "guess the number" game in Expect. Then, learn the real power of Expect with a separate script to automate playing the game.

Image by:

Opensource.com

Programming Automation What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

How open source leaders can foster an inclusive environment

Fri, 02/10/2023 - 16:00
By kcarcia

Open source leaders can foster inclusive communities for newcomers by creating belonging, providing opportunities, and showing support. They understand the intricacies of submitting code and making connections with other community members. In doing so, they build credibility and gain influence. This experience is invaluable to contributors who want to participate but don't know where to start.

A few years ago, I found myself in this daunting position when I began managing a team active in the Linux kernel community without any kernel experience myself. The complex code base, expansive email archives, and high-stakes communications intimidated me. When new kernel developers on my team expressed similar feelings, I realized my experience was common. For those supporting contributors or those seeking to contribute themselves, the path to entry is not always clear and can feel unattainable.

4 strategies for inclusive leadership

Open source leaders can have an impact by creating pathways for those looking to integrate into the community. The strategies covered in this article can be applied in formal mentoring or coaching relationships but are just as applicable in day-to-day interactions. Seemingly minor exchanges often have the most significant impacts when fostering inclusivity in an environment.

Approach with curiosity

Someone with less experience or coming from a non-traditional background may solve problems in unexpected or different ways. Reacting to those differences with judgment or criticism can create an unsafe environment for learning in communities that often have a steep knowledge curve. For example, long-time contributors to the Linux kernel understand its rich history. This means they have an implied understanding of community decisions and reactions. New contributors must build this knowledge but can only effectively do so if they feel safe taking necessary risks to grow their skill set.

Open source leaders can support newcomers as they learn by approaching them with curiosity. Consider asking questions like, "Can you help me understand why you took this approach?" rather than declaring proposed solutions right or wrong. Questions open a dialog for continued learning rather than shutting down the exploration of ideas. This process also broadens the leader's viewpoint, because the leader can learn by considering fresh perspectives.

Identify and share learning opportunities

Open source leaders can identify projects suitable for others to gain technical expertise and learn community processes. In creating opportunities for others, leaders also create more opportunities for themselves. This is because they make more time to explore new endeavors while continuing to advance their work through delegation. As leaders grow, their ability to enable others around them to succeed becomes just as critical as their direct contributions.

Knowing that failure is a part of learning, think about identifying projects where newcomers can safely fail without drastic consequences. In the Linux kernel, for example, there are certain parts of the code base where small changes can have disastrous consequences. Consider projects where small wins are achievable to help newcomers build confidence and feel empowered without high stakes. Make these ideas accessible by sharing them at conferences, in email forums, or in any way your community advertises how to become involved.

Demonstrate vulnerability

Having more experience doesn't mean you know everything. More often than not, even the most experienced Linux kernel contributors I've worked with are humbled by new challenges in uncharted subsystems. It's common for community members with less experience to view more experienced community members as having it all figured out. But having experience is about being adept at figuring out what you don't know. If you are in a position of authority and regarded as an expert, demonstrating vulnerability by sharing personal experiences of struggle and perseverance can be encouraging to those dealing with similar feelings.

Vouch for others

Introduce newcomers to your network. Connect them with community members with expertise in areas that pique their interests. Say their name in public forums and call out the excellent work they are doing. As a respected leader, your endorsement can help them build connections and trust within the community.

We can have rich and diverse communities by building in inclusivity. It is my hope that open source leaders will consider these suggestions because those you lift into the community will someday be able to extend a hand to others.


What an open license means to gamers

Fri, 02/10/2023 - 16:00
By sethkenlon

When it was released over 20 years ago, the Open Gaming License 1.0a (OGL) changed the tabletop gaming industry. It enabled publishers to use portions of the D&D rules, verbatim, in their own game books. It guaranteed that the owner of the D&D brand wouldn't sue you for creating and selling modular rules and adventures for the D&D game. And more importantly, it became a promise of collaboration for the gaming community. When you wanted to broadcast to other players that you were willing and eager to share ideas, you included the OGL in your game book, marking your game as open.

Recently, Wizards of the Coast attempted to revoke the Open Gaming License 1.0a, apparently on the grounds that legally the word "perpetual" isn't the same as "irrevocable". Luckily, the gaming community united and defended the license, and in the end Wizards of the Coast acquiesced. As a sign of good faith that came too late for many players, Wizards of the Coast released the System Reference Document (SRD), a subset of the rules published in the hardcover D&D book, into the Creative Commons.

In essence, the fifth edition of the world's first role-playing game (D&D) no longer belongs to Wizards of the Coast. It belongs to its community of players.

As an open source enthusiast, that makes a lot of sense to me, but I admit that for most people it probably seems odd that a corporation would be compelled by its community to surrender ownership of its main product. It's worth noting that D&D probably wouldn't still be around today if it hadn't maintained an open license for nearly 20 years (it wandered away from this during its 4th edition, but hastily course-corrected for the 5th edition). It's an important turn of events, not only for gamers, but for everyone invested in the idea of open culture and open source.

What open licensing means to gamers

Since the Open Gaming License was released in the early 2000s, there have been hundreds of games and adventures and supplements and source books that were never obligated to use the OGL. They were written from scratch using original language, never borrowing from a System Reference Document in any direct way. Just as there were lots of roleplaying supplements back in the 80s that happened to work with that one game by TSR, these books are independent material that often happen to work with existing systems. But authors chose to use the OGL 1.0a because they recognized that sharing ideas, mechanics, and content was what tabletop roleplaying is all about. It's what it's always been about. Getting together with friends, some old and some new, and inspiring each other. That fellowship extended to the computer screen once digital technology and its infrastructure got good enough to facilitate extended video and voice calls, and to provide emulated tabletops, and so the pool of potential friendships got even bigger. The OGL 1.0a was a document you could copy and paste into your book as a sign that you wanted to collaborate. You were inviting people to your writer's desk and to your gaming table.

[ Related read: Why sysadmins should license their code for open source ]

For a lot of gamers, the Open Gaming License 1.0a also defined the word "open". Being open is different than what we're used to. There's a lot of implied openness out there. Sure, you're allowed to dress up as your favourite Star Wars character—until Disney says otherwise. And maybe you can write some fan fiction based around your favourite TV series—as long as you don't sell it.

But the OGL 1.0a tells you exactly what you can use, and reference documents provide rules you can freely copy and paste into your own work, and then other authors can use your content to build up something cool. And nobody's restricted from selling their work, so it literally meant people could turn their hobby into their day job. And nobody could take that away.

What "open" means for everyone

In the software world, the definition of "open" is ardently protected by organizations like the Open Source Initiative. But for many gamers, that definition was a function of the Open Gaming License 1.0a. And Wizards of the Coast was trying to redefine it.

The term "open" is arguably best defined by the Free Software Foundation, which ironically doesn't use the term "open" and prefers "free" instead. Here's how the FSF defines the term "free":

  • The freedom to run code as you wish, for any purpose.

  • The freedom to study code to understand how it works, and to change it so it works better for you.

  • The freedom to redistribute copies of the original code.

  • The freedom to distribute copies of your modified code to others.

Extrapolating this to culture, you have similar concepts, including the freedom to share, the freedom to change and adapt, and the freedom to receive recognition for the work you've contributed.

3 open licenses gamers need to know

If you're a gamer who's confused about open licensing, don't let the recent attempt to revoke the Open Gaming License fool you. The important thing to remember is that an open community can exist with or without a corporate sponsor. You don't need the legal system to create an open gaming environment, but in today's world you may need a legal system to defend it. And there are licenses out there to help with that.

1. Creative Commons

The Creative Commons (CC) license is an agreement you can apply to something you've created, explicitly granting other people permission to redistribute and maybe even remix your work. The CC license is modular, so you get to decide what permissions you grant. There's an online "quiz" at creativecommons.org/choose to help you pick the right license for your project.

2. GNU Free Documentation

The 90s RPG Dead Earth was published using the GNU Free Documentation license, and it makes sense when you consider that most tabletop games are essentially just a set of rules. Game rules are essentially the "code" of a game written in natural language, so a license intended for technical documentation might make some sense for a game. The GNU Free Documentation license is a modern license, acknowledging that many documents exist only online as wiki pages, or that they may also be licensed under a Creative Commons license. It's also got provisions for the difference between making a personal copy of a book and printing a book by the hundreds.

To find out more about the GNU Free Documentation license, visit gnu.org/licenses/fdl-1.3.txt.

3. Open RPG Creative (ORC) License

The ORC license doesn't yet exist, but it's being formulated, in the open and with public participation, by well-known game publisher Paizo. This license is aiming to replace the OGL with legal text that recognizes the unique needs of a gaming system, in which trademarks and copyrighted material (such as lore, a fictional pantheon, the names of magic spells, and so on) intermingle. The ORC license seeks to make it possible for game publishers to explicitly allow and foster participation and ownership for its community, while also retaining control of the fictional world in which their own version of the game is set. Once completed, the ORC license will be placed in the trust of a non-profit foundation so that no company, in the future, can claim ownership of it with the aim of revoking or altering it.

An open systems reference document

The right license guarantees that others can build upon what you've created. For tabletop role-playing games, the thing being created is a set of rules. In the context of a game, a "rule" is an agreement all the players make with one another to define constraints for what you're "allowed" to do during the game. Of course, it's just a game so you can literally do whatever you want, but the rules define what you can expect as a result.

It's generally acknowledged that game rules aren't subject to copyright. They are seen as community property, but the literal words used to describe those rules are written by someone, and so the author of a rulebook does hold the copyright to their personal expression of a rule. But opening up copyrighted material for re-use is exactly what licenses were created for, and so a rulebook distributed under an open source license means you can copy and paste text straight from that rulebook into your own publication without betraying anyone's trust.


The system-reference-document.org project is working to preserve the D&D 5.1 rules (called the "System Reference Document") and to forge ahead with revisions as developed by the community. Visit the site today to download the open source D&D rules. Read them over, play a few games, and take note of what's confusing and what doesn't seem to work. Think about what you'd change. Maybe you don't like how passive perception works (wouldn't it be better as a codified reaction that could override a surprise effect?), or maybe the character build process is confusing (surely it could be delivered as a linear process of requirements?), or maybe there's something else. Now, thanks to open licensing, and to projects like system-reference-document.org, you can help change the issues you have with the game.

Open means open

For many of us, open source and open culture are the default. It feels unnecessary and self-important to declare a license for the work we put out into the world. But it's important to remember that not everyone knows your intent. When you release something to the world with the intent for it to be reused, shared, and maybe even remixed, apply an open license to it as a way of reassuring your collaborators-to-be that you support common culture, creativity, and open source. It might seem simple to quickly write your own statement of intent, call it a legal license, and paste it into your document, but if the fight to preserve the OGL has taught us anything, it's that the language of the legal system is not easily learned. Use a trusted license that has a community of stakeholders behind it, so that should it ever be threatened, you have a loud and collective voice to use in its defense.


Start developing for WebAssembly with our new guide

Thu, 02/09/2023 - 16:00
By sethkenlon

Over the past few decades, the web browser has endured as the most popular cross-platform application. Looking at the browser from a different angle, it is one of the most popular platforms for application delivery. Think of all the websites you use that take the place of activities you used to do with software running on your desktop. You're still using software, but you're accessing it through a browser, and it's running on somebody else's Linux server. In the eternal effort to optimize the software we all use, the world of software development introduced WebAssembly, standardized by the W3C in 2019, as a way to run compiled code through a web browser. Application performance is better than ever, and the options for coding go far beyond the usual list of PHP, Python, and JavaScript.

A target and a language

One of the powerful but also most confusing things about WebAssembly is that the term "WebAssembly" refers to both a language and a target. WebAssembly is an assembly language, but not many people choose to write code directly in assembly. Even the assembly language is ultimately converted to a binary format, which is what a computer requires to run code. This binary format is also called WebAssembly. This is good, though, because it means that you can use your choice of languages to write something that's ultimately delivered in WebAssembly, including C, C++, Rust, JavaScript, and many others.

The gateway into WebAssembly is Emscripten, an LLVM compiler toolchain that produces WebAssembly from your code.

Install Emscripten

To install Emscripten on your Linux or macOS computer, use Git:

$ git clone \
https://github.com/emscripten-core/emsdk.git

Change directory into the emsdk directory and run the install command:

$ ./emsdk install latest
$ ./emsdk activate latest

Everything in the Emscripten toolchain is installed within the emsdk directory and has no effect on the rest of your system. For this reason, before you use emsdk, you must source its environment:

$ source ./emsdk_env.sh

If you plan on using emsdk often, you can also source its environment setup script in .bashrc.
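For example, you might append a line like this to your ~/.bashrc (a sketch; adjust the path to wherever you cloned emsdk, and the redirection merely silences the banner in every new shell):

source "$HOME/emsdk/emsdk_env.sh" > /dev/null 2>&1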

To install Emscripten on Windows, you can run Linux in the WSL environment.

Visit the Emscripten website for more information on installation.

Hello world

Here's a simple "hello world" application written in C++.

#include <iostream>

using namespace std;

int main() {
    cout << "Hello world";
    return 0;
}

Test it as a standard binary for your system first:

$ g++ hello.cpp -o world
$ ./world
Hello world

Seeing that it works as expected, use emcc to build it as WebAssembly:

$ emcc hello.cpp -o world.html

Finally, run it with emrun:

$ emrun ./world.html

The emrun utility is a convenience command for local testing. When you host your application on a server, emrun isn't necessary.

Learning more about WebAssembly

Developing for WebAssembly can go in many different directions, depending on what you already know and what you're trying to build. If you know C or C++, then you can write your project using those. If you're learning Rust, then you can use Rust. Even Python code can use the Pyodide module to run as WebAssembly. You have lots of options, and there's no wrong way to start (there's even a COBOL-to-WebAssembly compiler). If you're keen to get started with WebAssembly, download our complimentary eBook.


Learn Tcl by writing a simple game

Thu, 02/09/2023 - 16:00
By JamesF

My path to Tcl started with a recent need to automate a difficult Java-based command-line configuration utility. I do a bit of automation programming using Ansible, and I occasionally use the expect module. Frankly, I find this module has limited utility for a number of reasons, including difficulty sequencing identical prompts, capturing values for use in additional steps, limited flexibility with control logic, and so on. Sometimes you can get away with using the shell module instead. But sometimes you hit that ill-behaved and overly complicated command-line interface that seems impossible to automate.

In my case, I was automating the installation of one of my company's programs. The last configuration step could only be done through the command-line, through several ill-formed, repeating prompts and data output that needed capturing. The good old traditional Expect was the only answer. A deep understanding of Tcl is not necessary to use the basics of Expect, but the more you know, the more power you can get from it. This is a topic for a follow-up article. For now, I explore the basic language constructs of Tcl, which include user input, output, variables, conditional evaluation, looping, and simple functions.

Install Tcl

On a Linux system, I use this:

# dnf install tcl
# which tclsh
/bin/tclsh

On macOS, you can use Homebrew to install the latest Tcl:

$ brew install tcl-tk
$ which tclsh
/usr/local/bin/tclsh

Guess the number in Tcl

Start by creating the basic executable script numgame.tcl:

$ touch numgame.tcl
$ chmod 755 numgame.tcl

And then start coding in your file headed up by the usual shebang script header:

#!/usr/bin/tclsh

Here are a few quick notes about Tcl's quirks to keep in mind as you follow along with this article.

The first point is that all of Tcl is considered a series of strings. Variables are generally treated as strings but can switch types and internal representations automatically (something you generally have no visibility into). Functions may interpret their string arguments as numbers (expr), and arguments are passed by value only. Strings are usually delineated using double quotes or curly braces. Double quotes allow for variable expansion and escape sequences, while curly braces impose no expansion at all.

The next point is that Tcl statements can be separated by semicolons but usually are not. Statement lines can be split using the backslash character. However, it's typical to enclose multiline statements within curly braces to avoid needing this. Curly braces are just simpler, and the code formatting below reflects this. Curly braces allow for deferred evaluation of strings. A value is passed to a function before Tcl does variable substitution.

Finally, Tcl uses square brackets for command substitution. Anything between the square brackets is sent to a new recursive invocation of the Tcl interpreter for evaluation. This is handy for calling functions in the middle of expressions or for generating parameters for functions.
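A few lines in tclsh illustrate all three points (a minimal sketch; the variable name is arbitrary):

set name "world"
puts "Hello, $name"            ;# double quotes: $name expands to its value
puts {Hello, $name}            ;# curly braces: printed literally, no expansion
puts [string length $name]     ;# square brackets: command substitution, prints 5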

Procedures

Although not necessary for this game, I start with an example of defining a function in Tcl that you can use later:

proc used_time {start} {
    return [expr [clock seconds] - $start]
}

Using proc sets this up to be a function (or procedure) definition. Next comes the name of the function. This is followed by a list containing the parameters (in this case, one parameter, {start}) and then the function body. Note that the body's opening curly brace must start on this line; it cannot be on the following line. The function returns a value. The returned value is a compound evaluation (square brackets) that starts by reading the system clock [clock seconds] and then does the math to subtract out the $start parameter.

Setup, logic, and finish

You can add more details to the rest of this game with some initial setup, iterating over the player's guesses, and then printing results when completed:

set num [expr round(rand()*100)]
set starttime [clock seconds]
set guess -1
set count 0

puts "Guess a number between 1 and 100"

while { $guess != $num } {
    incr count
    puts -nonewline "==> "
    flush stdout
    gets stdin guess
    if { $guess < $num } {
        puts "Too small, try again"
    } elseif { $guess > $num } {
        puts "Too large, try again"
    } else {
        puts "That's right!"
    }
}

set used [used_time $starttime]
puts "You guessed value $num after $count tries and $used elapsed seconds"


The first set statements establish variables. The first uses command substitution to compute a random number between 1 and 100, and the second saves the system clock start time.

The puts and gets commands are used for output to and input from the player. The puts calls I've used imply standard output. The gets command needs the input channel to be defined, so this code specifies stdin as the source for terminal input from the user.

The flush stdout command is needed when puts omits the end-of-line termination because Tcl buffers output and it might not get displayed before the next I/O is needed.

From there the while statement illustrates the looping control structure and conditional logic needed to give the player feedback and eventually end the loop.

The final set command calls our function to calculate elapsed seconds for gameplay, followed by the collected stats to end the game.

Play it!

$ ./numgame.tcl
Guess a number between 1 and 100
==> 100
Too large, try again
==> 50
Too large, try again
==> 25
Too large, try again
==> 12
Too large, try again
==> 6
Too large, try again
==> 3
That's right!
You guessed value 3 after 6 tries and 20 elapsed seconds

Continue learning

When I started this exercise, I doubted just how useful going back to a late 1990s fad language would be to me. Along the way, I found a few things about Tcl that I really enjoyed — my favorite being the square bracket command evaluation. It just seems so much easier to read and use than many other languages that overuse complicated closure structures. What I thought was a dead language was actually still thriving and supported on several platforms. I learned a few new skills and grew an appreciation for this venerable language.

Check out the official site over at https://www.tcl-lang.org. You can find references to the latest source, binary distributions, forums, docs, and information on conferences that are still ongoing.


3 types of leadership for open organizations

Thu, 02/09/2023 - 16:00
By bbehrens

In the classic movie Born Yesterday, a crime boss repeatedly demonstrates his leadership style by bellowing, "Do what I'm tellin' ya!" in a loud, threatening voice. It's entertaining in a comedy, but it would be a recipe for failure and getting ignored in an open organization.

In this article, I review forms of leadership that can be effective in an open organization. Remember that these leadership forms do not exist in a vacuum or silos. To be an effective manager, you want to mix and match techniques from each leadership style based on the requirements of a situation.

These three approaches to leadership are helpful for open organizations.

Servant leadership

There is a saying that politicians want to get elected either to be something or to do something. This adage applies to any type of leader. Some leaders simply want to be in command. While all leaders are ambitious, for this type of leader, satisfying their ambition is the primary goal. The acquisition of power is an end unto itself; once they have it, they may be uninterested in using it to solve problems or build something. Anything that the organization achieves looks like a personal triumph to them.

By contrast, when you're a servant leader, you see your leadership role as a means to serve people. In the political world, you would view public service as not a cliche but as an opportunity to help the public. As a servant leader, you work to improve things for the people you lead and are primarily concerned about the welfare of those around you.

Servant leadership is also contagious. By focusing on the welfare and development of the people you lead, you're growing the next generation of servant leaders. As a servant leader, you're not interested in taking all the credit. For example, when legendary baseball manager Casey Stengel was congratulated for winning a league championship, he famously remarked, "I couldn't have done it without my players." One of his greatest skills as a manager was maximizing each player's contributions to benefit the whole team.

Quiet leadership

For the past several years, we've been living in the age of the celebrity CEO. They are easy to recognize: They are brash and loud, they promote themselves constantly, and they act as if they know the answer to every problem. They attempt to dominate every interaction, want to be the center of attention, and often lead by telling others what to do. Alice Roosevelt Longworth described her father, US President Theodore Roosevelt, as someone who "wanted to be the corpse at every funeral, the bride at every wedding, and the baby at every christening." Roosevelt was an effective leader who did extraordinary things, such as starting the US National Park Service and building the Panama Canal, but he was anything but quiet.

In contrast, when you're a quiet leader, you lead by example. You don't fixate on problems; instead, you maintain a positive attitude and let your actions speak for themselves. You focus on what can be done. You lead by solving problems and by providing an example to your team. When faced with unexpected issues, the quiet leader doesn't spend time complaining but looks for solutions and implements them.

Open leadership

As a servant leader, you work to assist the members of your organization in growing into leaders. Quiet leaders lead by example. Servant leaders and quiet leaders do not act in an autocratic manner. Open leaders combine many of these characteristics.

An open leader is also not a top-down autocratic leader. As an open leader, you succeed by creating organizations in which teams can thrive. In other words, as an open leader, you create a framework or environment in which your organization can achieve the following goals according to The Open Organization Definition:

  • Greater agility: In an open organization, all team members have a clear understanding of the organization's goals and can, therefore, better work together to achieve those goals.
     
  • Faster innovation: In an open organization, ideas are heard (and reviewed and argued over) regardless of their origin. Ideas are not imposed on the organization by its leaders.
     
  • Increased engagement: Because members of the organization can contribute to decisions about the organization's direction, they have a sense of ownership for the team's goals.

The Open Organization defines the following five characteristics as basic tenets of open organizations:

  • Transparency: The organization's decision-making process is open, as are all supporting project resources. The team is never surprised by decisions made in isolation.
     
  • Inclusivity: All team members are included in discussions and reviews. Rules and protocols are established to ensure that all viewpoints are reviewed and respected.
     
  • Adaptability: Feedback is requested and accepted on an ongoing basis. The team continually adjusts its future actions based on results and inputs.
     
  • Collaboration: Team members work together from the start of a project or task. Work is not performed in isolation or in silos and then presented to the rest of the team for input.
     
  • Community: Team members have shared values regarding how the organization functions. Team leaders model these values. All team members are encouraged to make contributions to the team.
Putting leadership styles to work

How can you, as an open leader, incorporate the characteristics of servant and quiet leadership?

In an open organization, to support an inclusive community, you function as a mentor. Just as a servant leader acts to teach and cultivate future servant leaders, you must walk the walk, leading by example, ensuring transparency and collaboration, and operating according to shared values.


How can a quiet leader contribute to an open organization? Open organizations tend to be, for lack of a better word, noisy. Communication and collaboration in an open organization are constant and can sometimes be overwhelming to people not accustomed to it. The ownership felt by members of open organizations can result in contentious and passionate discussions and disagreements.

Quiet leaders with a positive outlook tend to see paths forward through seemingly contradictory viewpoints. Amid these discussions, a quiet leader cuts through the noise. As a calming influence on an open organization, a quiet leader can help people get past differences while driving solutions.


Improve your coding skills with temporal values in MySQL

Wed, 02/08/2023 - 16:00
By HunterC

Both new and experienced users of the popular MySQL database system can often get confused about how temporal values are handled by the database. Sometimes users don't bother learning much about temporal value data types. This may be because they think there isn't much to know about them. A date is a date, right? Well, not always. Taking a few minutes to learn how MySQL stores and displays dates and times is beneficial. Learning how to best take advantage of the temporal values in your database tables can help make you a better coder.

MySQL temporal data types

When you are building your tables in MySQL, you choose the data type that most efficiently holds the data you intend to insert into the table (INT, FLOAT, CHAR, and so on). MySQL provides five data types for temporal values: DATE, TIME, DATETIME, TIMESTAMP, and YEAR.

MySQL uses the ISO 8601 format to store the values in the following formats:

  • DATE: YYYY-MM-DD
  • TIME: HH:MM:SS
  • DATETIME: YYYY-MM-DD HH:MM:SS
  • TIMESTAMP: YYYY-MM-DD HH:MM:SS
  • YEAR: YYYY
Datetime compared to Timestamp

You may have noticed that the DATETIME and TIMESTAMP data types hold the same data. You might wonder if there are any differences between the two. There are differences.

First, the range of dates that can be used differ. DATETIME can hold dates between 1000-01-01 00:00:00 and 9999-12-31 23:59:59, whereas TIMESTAMP has a much more limited range of 1970-01-01 00:00:01 to 2038-01-19 03:14:07 UTC.

Second, while both data types allow you to auto_initialize or auto_update their respective values (with DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP respectively), doing so was not available for DATETIME values until version 5.6.5. You can use one of the MySQL synonyms for CURRENT_TIMESTAMP if you choose, such as NOW() or LOCALTIME().
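As an illustration, here is a minimal sketch of a table (with hypothetical names) that uses both clauses; on MySQL 5.6.5 or later, this works for DATETIME as well as TIMESTAMP:

mysql> create table events (
         row_id smallint not null auto_increment primary key,
         created_at datetime default current_timestamp,
         updated_at timestamp default current_timestamp on update current_timestamp);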

[ Download now: MariaDB and MySQL cheat sheet ]

If you use ON UPDATE CURRENT_TIMESTAMP (or one of its synonyms) for a DATETIME value but do not use the DEFAULT CURRENT_TIMESTAMP clause, the column defaults to NULL. This happens unless you include NOT NULL in the table definition, in which case it defaults to zero.

Another important thing to keep in mind is that although normally neither a DATETIME nor a TIMESTAMP column have a default value unless you declare one, there is one exception to this rule. The first TIMESTAMP column in your table is implicitly created with both DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP clauses if neither is specified and if the variable explicit_defaults_for_timestamp is disabled.

To check this variable's status, run:

mysql> show variables like 'explicit_default%';

If you want to turn it on or off, run this code, using 0 for off and 1 for on:

mysql> set explicit_defaults_for_timestamp = 0;

Time

MySQL's TIME data type may seem simple enough, but there are a few things that a good programmer should keep in mind.

First, be aware that although time is often thought of as the time of day, it is in fact elapsed time. In other words, it can be a negative value or can be greater than 23:59:59. A TIME value in MySQL can be in the range of -838:59:59 to 838:59:59.

Also, if you abbreviate a time value, MySQL interprets it differently depending on whether you use a colon. For example, MySQL sees the value 10:34 as 10:34:00. That is, 34 minutes past ten o'clock. But if you leave out the colon, 1034, MySQL sees it as 00:10:34. That is, ten minutes and 34 seconds.
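You can verify both interpretations directly (a quick sketch to run in any session):

mysql> SELECT CAST('10:34' AS TIME), CAST('1034' AS TIME);
+-----------------------+----------------------+
| CAST('10:34' AS TIME) | CAST('1034' AS TIME) |
+-----------------------+----------------------+
| 10:34:00              | 00:10:34             |
+-----------------------+----------------------+
1 row in set (0.00 sec)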

Finally, you should know that TIME values (as well as the time portion of DATETIME and TIMESTAMP columns) can, as of version 5.6.4, take a fractional unit. To use it, add an integer (max value six) in parentheses at the end of the data type definition.

time_column TIME(2)

Time zones

Time zone changes not only cause confusion and fatigue in the real world, but have also been known to cause problems in database systems. The earth is divided into 24 separate time zones which usually change with every 15 degrees of longitude. I say usually because some nations choose to do things differently. China, for example, operates under a single time zone instead of the five that would be expected.

The question is, how do you handle users of a database system who are in different time zones? Fortunately, MySQL doesn't make this too difficult.

To check your session time zone, run:

mysql> select @@session.time_zone;

If it says System, it is using the time zone set in your my.cnf configuration file. If you are running your MySQL server on your local computer, this is probably what you'll get, and you don't need to make any changes.

If you would like to change your session's time zone, run a command such as:

mysql> set time_zone = '-05:00';

This sets your time zone to five hours behind UTC. (US/Eastern).
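If you need to translate a stored value between zones rather than change the session, the CONVERT_TZ() function can help. Here is a sketch using numeric offsets (named zones such as 'US/Eastern' also work, but only if the MySQL time zone tables have been loaded):

mysql> SELECT CONVERT_TZ('2023-01-13 12:00:00', '+00:00', '-05:00');
+--------------------------------------------------------+
| CONVERT_TZ('2023-01-13 12:00:00', '+00:00', '-05:00')  |
+--------------------------------------------------------+
| 2023-01-13 07:00:00                                    |
+--------------------------------------------------------+
1 row in set (0.00 sec)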

Getting the day of the week

To follow along with the code in the rest of this tutorial, you should create a table with date values on your system. For example:

mysql> create table test (
         row_id smallint not null auto_increment primary key,
         the_date date not null);

Then insert some random dates into the table using the ISO 8601 format, such as:

mysql> insert into test (the_date) VALUES ('2022-01-05');

I put four rows of date values in my test table, but put as few or as many as you'd like.

Sometimes you may wish to know what day of the week a particular day happened to be. MySQL gives you a few options.

The first, and perhaps most obvious way, is to use the DAYNAME() function. Using the example table, DAYNAME() tells you the day of the week for each of the dates:

mysql> SELECT the_date, DAYNAME(the_date) FROM test;
+------------+-------------------+
| the_date   | DAYNAME(the_date) |
+------------+-------------------+
| 2021-11-02 | Tuesday           |
| 2022-01-05 | Wednesday         |
| 2022-05-03 | Tuesday           |
| 2023-01-13 | Friday            |
+------------+-------------------+
4 rows in set (0.00 sec)

The other two methods for getting the day of the week return integer values instead of the name of the day. They are WEEKDAY() and DAYOFWEEK(). They both return numbers, but they do not return the same number. The WEEKDAY() function returns a number from 0 to 6, with 0 being Monday and 6 being Sunday. On the other hand, DAYOFWEEK() returns a number from 1 to 7, with 1 being Sunday and 7 being Saturday.

mysql> SELECT the_date, DAYNAME(the_date), WEEKDAY(the_date), DAYOFWEEK(the_date) FROM test;
+------------+-------------------+-------------------+---------------------+
| the_date   | DAYNAME(the_date) | WEEKDAY(the_date) | DAYOFWEEK(the_date) |
+------------+-------------------+-------------------+---------------------+
| 2021-11-02 | Tuesday           |                 1 |                   3 |
| 2022-01-05 | Wednesday         |                 2 |                   4 |
| 2022-05-03 | Tuesday           |                 1 |                   3 |
| 2023-01-13 | Friday            |                 4 |                   6 |
+------------+-------------------+-------------------+---------------------+
4 rows in set (0.00 sec)

When you only want part of the date

Sometimes you may have a date stored in your MySQL table, but you only wish to access a portion of the date. This is no problem.

There are several conveniently-named functions in MySQL that allow for easy access to a particular portion of a date object. To show just a few examples:

mysql> SELECT the_date, YEAR(the_date), MONTHNAME(the_date), DAYOFMONTH(the_date) FROM test;
+------------+----------------+---------------------+----------------------+
| the_date   | YEAR(the_date) | MONTHNAME(the_date) | DAYOFMONTH(the_date) |
+------------+----------------+---------------------+----------------------+
| 2021-11-02 |           2021 | November            |                    2 |
| 2022-01-05 |           2022 | January             |                    5 |
| 2022-05-03 |           2022 | May                 |                    3 |
| 2023-01-13 |           2023 | January             |                   13 |
+------------+----------------+---------------------+----------------------+
4 rows in set (0.00 sec)

MySQL also allows you to use the EXTRACT() function to access a portion of a date. The arguments you provide to the function are a unit specifier (be sure that it's singular), FROM, and the column name. So, to get just the year from our test table, you could write:

mysql> SELECT EXTRACT(YEAR FROM the_date) FROM test;
+-----------------------------+
| EXTRACT(YEAR FROM the_date) |
+-----------------------------+
|                        2021 |
|                        2022 |
|                        2022 |
|                        2023 |
+-----------------------------+
4 rows in set (0.01 sec)

Inserting and reading dates with different formats

As mentioned earlier, MySQL stores date and time values using the ISO 8601 format. But what if you want to store date and time values another way, such as MM-DD-YYYY for dates? Well, first off, don't try. MySQL stores dates and times in the 8601 format and that's the way it is. Don't try to change that. However, that doesn't mean you have to convert your data to that particular format before you enter it into your database, or that you cannot display the data in whatever format you desire.

If you would like to enter a date into your table that is formatted in a non-ISO way, you can use STR_TO_DATE(). The first argument is the string value of the date you want to store in your database. The second argument is the formatting string which lets MySQL know how the date is organized. Let's look at a quick example, and then I'll delve a little deeper into what that odd-looking formatting string is all about.

mysql> insert into test (the_date) values (str_to_date('January 13, 2023','%M %d, %Y'));
Query OK, 1 row affected (0.00 sec)


You put the formatting string in quotes, and precede each of the special characters with a percent sign. The format sequence in the above code tells MySQL that my date consists of a full month name (%M), followed by a two-digit day (%d), then a comma, and finally a four-digit year (%Y). Note that capitalization matters.

Some of the other commonly used formatting string characters are:

  • %b abbreviated month name (example: Jan)
  • %c numeric month (example: 1)
  • %W name of day (example: Saturday)
  • %a abbreviated name of day (example: Sat)
  • %T 24-hour time (example: 22:01:22)
  • %r 12-hour time with AM/PM (example: 10:01:22 PM)
  • %y 2-digit year (example: 23)

Note that for the two-digit year (%y), the range of years is 1970 to 2069. So numbers from 70 through 99 are assumed to be in the 20th century, while numbers from 00 through 69 are assumed to be in the 21st century.
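A quick sketch demonstrates the cutoff (the dates are hypothetical):

mysql> SELECT STR_TO_DATE('01/01/69', '%m/%d/%y'), STR_TO_DATE('01/01/70', '%m/%d/%y');
+-------------------------------------+-------------------------------------+
| STR_TO_DATE('01/01/69', '%m/%d/%y') | STR_TO_DATE('01/01/70', '%m/%d/%y') |
+-------------------------------------+-------------------------------------+
| 2069-01-01                          | 1970-01-01                          |
+-------------------------------------+-------------------------------------+
1 row in set (0.00 sec)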

If you have a date stored in your database, and you would like to display it using a different format, you can use the DATE_FORMAT() function:

mysql> SELECT DATE_FORMAT(the_date, '%W, %b. %d, %y') FROM test;
+-----------------------------------------+
| DATE_FORMAT(the_date, '%W, %b. %d, %y') |
+-----------------------------------------+
| Tuesday, Nov. 02, 21                    |
| Wednesday, Jan. 05, 22                  |
| Tuesday, May. 03, 22                    |
| Friday, Jan. 13, 23                     |
+-----------------------------------------+
4 rows in set (0.00 sec)

Conclusion

This tutorial should give you a helpful overview of date and time values in MySQL. I hope that this article has taught you something new that allows you to have both better control and a greater understanding into how your MySQL database handles temporal values.


Kubernetes migration made simple with Konveyor Move2Kube

Wed, 02/08/2023 - 16:00
Kubernetes migration made simple with Konveyor Move2Kube Mehant Kammakomati Wed, 02/08/2023 - 03:00

Konveyor Move2Kube assists developers in migrating projects from platforms such as Cloud Foundry and Docker swarm to Kubernetes and OpenShift. Move2Kube analyzes your application's source code and generates Infrastructure-as-Code (IaC) artifacts such as Kubernetes YAMLs, Helm charts, Tekton pipelines, and so on.


Powering Move2Kube is a transformer framework that enables multiple small transformers to be chained together to transform the artifacts completely.


Many different transformers get involved when transforming a Java or Node.js project to create all the destination artifacts. This allows for the reuse of transformers in various end-to-end flows. Each transformer is capable of performing multiple activities.


Move2Kube

You can use Move2Kube as a terminal command or as a web app. Its core functionality includes planning and transformation. In the planning phase, Move2Kube analyzes artifacts to identify the services involved. In the transformation phase, it transforms those services into destination artifacts.
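If you want to review what Move2Kube detects before changing anything, the planning phase can also be run on its own. As a quick sketch, this writes a plan file (m2k.plan) describing the detected services, which the transform phase then consumes:

$ move2kube plan -s ./src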

The terminal command is a single binary, which you can download and install. Move2Kube also provides a helper script to download and place the binary in your local filesystem:

curl \
  https://raw.githubusercontent.com/konveyor/move2kube/main/scripts/install.sh \
  -o move2kube_install.sh

Look through the script to ensure its install method aligns with your preference, and then run it:

$ sh ./move2kube_install.sh

To use the command, just run it on a directory containing the application source code:

$ move2kube transform -s ./src

Transform an enterprise scale application

Move2Kube can be used to replatform a real-world enterprise application. There's a demo enterprise app included in the Move2Kube git repository to demonstrate the workflow. This demo app is similar to a typical real-world application with CRUD operations and a multi-tier architecture.

To try it out, download the source code for the enterprise app:

$ curl https://move2kube.konveyor.io/scripts/download.sh \
  | bash -s -- -d samples/enterprise-app/src -r move2kube-demos

The source code for the enterprise app is in the src directory.

First, use the move2kube transform command:

$ move2kube transform -s ./src

After running the command, look in the myproject folder. There are new directories, including deploy, scripts, and source. The deploy directory contains all IaC artifacts, such as Kubernetes YAML files, Tekton pipelines, Helm charts, Knative files, compose files, and OpenShift files. The scripts directory contains shell scripts to build and push container images to the registry of your choice. Finally, the source directory contains the source code and Dockerfiles.
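The exact contents depend on your project, but as a rough sketch, the generated layout looks something like this:

myproject/
├── deploy/    IaC artifacts: Kubernetes YAMLs, Helm charts, Tekton pipelines, and so on
├── scripts/   shell scripts to build and push container images
└── source/    application source code and generated Dockerfiles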

Capabilities provided by Move2Kube

Move2Kube has a powerful QA Engine. Transformers can get input from a user through this engine, whether as interaction through a terminal, a web interface, a REST API, or a configuration file.

For instance, in the demo enterprise app, the Move2Kube QA engine might ask which port the frontend app should listen on, which container registry should be used to store images, or what ingress host should be used.

To run an app in a non-interactive mode, use the --qa-skip flag. This option causes Move2Kube to use default answers:

$ move2kube transform -s ./src --qa-skip

If you want to answer the questions from a configuration file, use the -f option:

$ move2kube transform -s ./src -f ./m2kconfig.yaml

A list of all answers used for the run is captured as a config in the m2kconfig.yaml file.

Customization capability

Move2Kube includes several transformers ready for use, and it allows users to write new transformers. Move2Kube exposes all the internal transformers' capabilities to be leveraged for writing custom transformers. These capabilities include a QA engine, extensive templating enabled by Golang templates, and isolation. Custom transformers behave exactly the same as built-in transformers. The two types can be chained together to achieve an end-to-end transformation.

Move2Kube generates artifacts you can customize to comply with organizational best practices and policies. You can direct the Move2Kube tool to customizations using the -c or --customization option.

You can create customizations using three different methods:

  1. Configure built-in transformers: Configure a built-in transformer to behave differently. For example, you can modify the parameterization transformer to parameterize different values depending on the organization's needs.
  2. Starlark-based transformers: Write a complete transformer in a Python-like language, Starlark. For example, you could use a Starlark-based transformer to add custom annotations to Kubernetes YAML files.
  3. Executable transformers: Write a complete transformer in a language of your choice and allow Move2Kube to execute it along with the other transformers. For example, generate custom Helm charts to add custom files and directories in specific locations.
Parameterization and customization capability

Move2Kube allows users to parameterize custom fields in the target platform artifacts, such as Helm charts. For instance, parameterizing the number of replicas in a Helm chart:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/service: orders
  name: orders
spec:
  progressDeadlineSeconds: 600
  replicas: {{ index .Values "common" "replicas" }}

Move2Kube also allows you to customize output artifacts. For instance, add a custom annotation to the Ingress YAML file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: haproxy
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/service: myproject
  name: myproject

Move2Kube case studies

Move2Kube has been widely adopted in industry and in open source communities. Here are some case studies where replatforming with Move2Kube has shown considerable improvement over manual effort.

  • InsApp
    • Language Stack: Java (Spring Boot), AngularJS UI
    • Source Platform: Docker Swarm
    • Number of Services: 100
    • Manual Effort: 56 days
    • Move2Kube Effort: 6 days
    • In-Built Transformers Invoked: 6
    • Number of External Transformers: 0
  • AA case study
    • Language Stack: Java (Spring Boot), AngularJS UI
    • Source Platform: Cloud Foundry
    • Number of Services: 3
    • Manual Effort: 2 days
    • Move2Kube Effort: 15 minutes
    • In-Built Transformers Invoked: 14
    • Number of External Transformers: 0
  • CP case study
    • Language Stack: Python
    • Source Platform: ECS Fargate
    • Number of Services: 7
    • Manual Effort: 12 days
    • Move2Kube Effort: 1 day
    • In-Built Transformers Invoked: 13
    • Number of External Transformers: 0
  • MFA case study
    • Language Stack: .NET Silverlight UI
    • Source Platform: Bare-metal/VM
    • Number of Services: 4
    • Manual Effort: 9 days
    • Move2Kube Effort: 5 hours
    • In-Built Transformers Invoked: 14
    • Number of External Transformers: 1 (custom dependencies)
  • TMP case study
    • Language Stack: Java (Spring Boot)
    • Source Platform: Cloud Foundry
    • Number of Services: 24
    • Manual Effort: 25 days
    • Move2Kube Effort: 2.25 days
    • In-Built Transformers Invoked: 15
    • Number of External Transformers: 1 (custom directories)

Data source: Seshadri, Padmanabha V., Harikrishnan Balagopal, Akash Nayak, Ashok Pon Kumar, and Pablo Loyola. "Konveyor Move2Kube: A Framework For Automated Application Replatforming." In 2022 IEEE 15th International Conference on Cloud Computing (CLOUD), pp. 115-124. IEEE, 2022.

Learn more about Konveyor Move2Kube

Visit the Move2Kube site to learn more about replatforming using Konveyor Move2Kube.

Konveyor accelerates the process of replatforming to Kubernetes by analyzing source artifacts.



A brief history of LibreOffice

Tue, 02/07/2023 - 16:00
A brief history of LibreOffice italovignoli Tue, 02/07/2023 - 03:00

In early 2009, OpenOffice.org was the main competitor to Microsoft Office in the individual office productivity suites market. The popular open source office suite's community looked forward to a November conference in Orvieto, Italy. Things were going well, and the future looked bright.

And then, in April of that year, Oracle announced its plans to acquire Sun Microsystems.

Personally, I knew it was bad news for OpenOffice.org. Oracle had no interest in the open source suite, and I felt confident it would abandon the project. Of course, I hoped to be proved wrong at the upcoming conference. Instead, a single representative from Oracle, with no budget to speak of, arrived in Orvieto and talked vaguely about monetization and re-branding. I felt that my worst fears were confirmed, and my fellow community members agreed.

The community returned home from Orvieto that year and resolved to take action. The time had finally come to turn into reality what the OpenOffice.org project had promised. We were determined to create an independent foundation to manage the project's assets and promote the development of the suite under the umbrella of the community. OpenOffice.org would no longer belong to a company but to its users and individual contributors.

Building the foundation

At the time, the OpenOffice.org project had a presence on every continent, with language communities helping to localize and promote it. The four most important were:

  • German: The software was born in Germany, and StarDivision was based in Hamburg, so there was a natural link between the group of developers and German-speaking supporters.
  • French: The French government supported open source software.
  • Italian: The group to which I belonged.
  • Brazilian

At the beginning of 2010, at the initiative of the French and German language communities, the most active volunteers—together with some independent and SUSE developers—started working on a fork project. The aim was to launch an alternative project involving both the global community and the companies invested in OpenOffice.org.

I have over 30 years of experience working in international business and consultancy agencies. The project brought me in to manage the marketing and communication strategy.

In the months that followed, activity became increasingly hectic. There was a weekly teleconference meeting, as the news coming in from StarDivision (the department responsible for OpenOffice.org) was increasingly negative.

Even with the dissolution of OpenOffice.org seemingly imminent, a conference in Budapest was confirmed by the publication of a CFP (Call for Papers). The fork project members, of course, did nothing differently from previous years: They presented their talk proposals and made travel plans.

A safe place for documents

At the beginning of the summer, the fork was almost ready. Our group met in Budapest to gauge the situation from the OpenOffice.org side and for a first face-to-face organizational meeting.

The Budapest conference ran smoothly, with meetings, keynotes, and technical sessions taking place over the three-day event. Everything seemed more or less normal.

Everything was not normal.

Some attendees were a little suspicious when several leading figures failed to attend the conference's main social event, an overnight cruise on the Danube. We didn't participate in this event because we were meeting in a restaurant to discuss the final details of a new foundation. There was a lot to get right. We had to determine an announcement date and the composition of the Steering Committee that would coordinate the tasks required to bring the foundation to life.

LibreOffice

The three weeks between the conference and the announcement of LibreOffice were hectic. I prepared the launch strategy and the text of the press release. The developers prepared the software. The application's name had just been decided a few days earlier during a teleconference (which I'd joined from Grosseto, where I was attending the Italian open source software community meeting).

On September 28, 2010, I distributed the press release announcing The Document Foundation and LibreOffice to a global mailing list of about 250 journalists, which I painstakingly put together using input from the public relations agencies where I worked.

Here is the release:

The community of volunteers developing and promoting OpenOffice.Org announces an independent foundation to drive the further growth of the project. The foundation will be the cornerstone of a new ecosystem where individuals and organisations can contribute to and benefit from the availability of a truly free office suite. It will generate increased competition and choice for the benefit of customers and drive innovation in the office suite market. From now on the OpenOffice.Org community will be known as The Document Foundation.

We invited Oracle to become a member of the foundation and donate the brand the community had grown during the previous ten years. Pending the decision, we chose the brand LibreOffice for the software going forward.

Reactions to the announcement from the press were very positive. On the other hand, companies and analysts tended to be suspicious of an office suite governed by a community, an entity they never fully understood because of its flat, meritocratic organization.

In the two weeks following the announcement, 80 new developers joined the project, disproving the predictions of those who considered it unrealistic to launch a fork relying only on SUSE and Red Hat developers. Unsurprisingly, most of the language communities switched to LibreOffice.

LibreOffice was built from the source code of OpenOffice.org, with new functionality integrated from the Go-OO fork rather than from OOo itself.

For this reason, the first version of LibreOffice (announced on January 25, 2011) was numbered 3.3 to maintain consistency with OpenOffice.org. This made the transition easier for users who migrated to the new suite from its first version. The software was still a little immature due to significant technical debt, which caused problems and instability that were largely corrected through code cleaning and refactoring throughout the 3.x and 4.x versions. By versions 5.x and 6.x, the source code was considered stable, which allowed the user interface to be improved and mobile and cloud versions to be developed.

In the spring of 2011, Oracle transferred the OpenOffice.org source code to the Apache Software Foundation. The project lasted for three years. The last new version was nearly a decade ago.

The future is open

The formation process of The Document Foundation ended in early 2012, with registration by the Berlin authorities on February 17, 2012. This was a lengthy process because the founders wanted volunteer members of the project also to be members of the foundation based on contributions. This detail hadn't been foreseen for foundations under German law, so it required several revisions of statutes to comply with this condition.

The foundation's first activities included the election of the membership committee, the structure that decides on the transition from mere volunteer to member of The Document Foundation on the basis of contributions. It has five members and three deputies. There's also a Board of Directors, which steers the foundation administratively and strategically, consisting of seven members and three deputies.

At the end of 2012, the foundation hired its first employee, Florian Effenberger, who was later promoted to executive director. Today, the team has a dozen members who take care of day-to-day activities such as coordinating projects, administration, network infrastructure management, software releases, mentoring of new developers, coordination of quality assurance, user interface evolution, and marketing and communications.

Right now, the foundation is looking for developers to handle tasks that don't fit the objectives of enterprise customers, such as RTL language management and accessibility. These features aren't developed by the companies in the LibreOffice ecosystem, which instead offer feature development services, Level 3 support, and Long Term Support versions of the software optimized for enterprise needs.

More than 12 years after the announcement of LibreOffice and The Document Foundation, we can say that we have achieved our goal of developing an independent free and open source (FOSS) project. Our project is based on an extended community of individual volunteers and companies contributing according to their abilities. These participants help create the unmatched free office suite and support open standards by adopting and evolving the only true standard office document format on the market (Open Document Format, or ODF) while also ensuring excellent compatibility with the proprietary OOXML format.

The sustainability of this model is a day-to-day problem. There's severe competition from big tech firms. We're always searching for a balance between those who would like everything to be cost-free and those who would like each user to contribute according to their ability. No matter what, though, LibreOffice is an open source office suite, providing added value above and beyond its competition.

Try LibreOffice. Donate. Support it at home and work. Tell your friends about it. LibreOffice is the open source office solution that ensures you always have access to your data and control over your creativity.

The origin story of LibreOffice, the open source office solution that ensures you always have access to your data and control over your creativity.



How the Gherkin language bridges the gap between customers and developers

Tue, 02/07/2023 - 16:00
How the Gherkin language bridges the gap between customers and developers David Blackwood Tue, 02/07/2023 - 03:00

Communicating with software developers can often be a burdensome task, especially when people lack technical knowledge and technical vocabulary. This is why project managers often use user stories and the versatile system metaphor.

You can assist communication further by utilizing technology designed to facilitate discussions between a project's stakeholders and developers.

The Cucumber framework

Cucumber is an open source framework that enables the creation of automated software tests using an easy-to-write and common language. It's based on the concept of behavior-driven development (BDD), which dictates that creating software should define how a user wants an application to behave when specific conditions are true.

The Cucumber framework isn't "technology" in the modern sense. It's not a collection of bits and bytes. Instead, it's a way of writing in natural language (English, in the case of this article, but so far Gherkin has been translated to over 70 languages). When using the Cucumber framework, you aren't expected to know how to read or write code. You only need to be able to write down ideas you have about how you work. You should also document how you want technology to work for you, using a set of specific terms and guidelines.

What is the Gherkin language?

Cucumber uses Gherkin as a means to define use cases. It's primarily used to generate unambiguous project requirements. In other words, its purpose is to allow users to describe precisely what they require software to do, leaving no room for interpretation or exception. It helps you think through the process of a transaction with technology and then helps you write it down in a form that translates into programmer logic.

Here's an example:

Feature: The Current Account Holder withdraws money

  Scenario: The account in question is not lacking in funds
    Given that the account balance is £200
    And the debit card is valid
    And the cash machine contains enough money
    When the Current Account Holder requests £50
    Then the cash machine dispenses £50
    And the account balance is £150
    And the debit card is returned

As you can see, this is a highly specific scenario in which an imaginary user requests £50, and the ATM provides £50 and adjusts the user's account balance accordingly. This scenario is just one part of an ATM's purpose, and it only represents a specific component of a person's interaction with a cash machine. When a programmer is given the task to program the machine to respond to a user request, this clearly demonstrates what factors are involved.

What are Gherkin keywords?

The Gherkin syntax makes use of five indispensable statements, plus a connector, to describe the actions needed to perform a task:

  • Feature: denotes a high-level description of any given software function

  • Scenario: describes a concrete example

  • Given: explains the initial context of the system

  • When: specifies an event or action

  • Then: describes an expected outcome, or a result

  • And (or But): increases text fluidity

By making use of these simple keywords, customers, analysts, testers, and software programmers are empowered to exchange ideas with terminology that's recognizable by all.

Executable requirements and automated testing

Even better, Gherkin requirements are also executable. This is done by mapping each and every keyword to its intended (and clearly stated) functionality. So, to keep with the example above, anything already implemented could automatically be displayed in green:

When the Current Account Holder requests £50
Then the cash machine dispenses £50
And the account balance is £150
And the debit card is returned

By extension, Gherkin enables developers to translate requirements into testable code. In practice, you can use specific phrases to check in on your software solutions! If your current code isn't working properly, or a new change has accidentally caused a software error (or two or three) then you can easily pinpoint problems before proceeding to repair them.

Conclusion

Thanks to the Gherkin syntax, your customers will no longer be in a pickle. You can bridge the divide between businesses and developers and deliver outstanding products with greater confidence than ever before.

Find out more about Gherkin by visiting the Cucumber website or its Git repository.

The Gherkin syntax helps you think through the process of a transaction with technology and then helps you write it down in a form that translates into programmer logic.



Wordsmith on the Linux command line with dict

Mon, 02/06/2023 - 16:00
Wordsmith on the Linux command line with dict dboth Mon, 02/06/2023 - 03:00

As a writer, I frequently need to determine the correct spelling or definition of words. I also need to use a thesaurus to find alternate words that may have a somewhat different connotation than the one I might otherwise use. Because I frequently use the Linux command line and text-mode tools to do much of my work, it makes sense to use a command line dictionary.

I really like using the command line for a number of reasons, the primary one being that it is more efficient for me. It is also far more comprehensive than any one physical paper dictionary, or even several, could ever be. I have been using the Linux dict command for many years, and I have come to depend on it.

Install dict on Linux

The dict program is not installed by default on Fedora, but it's easy to install. Here is how to install it on Fedora and similar distributions:

$ sudo dnf install dictd

On Debian and similar distributions, you must also install the dictionary definitions:

$ sudo apt install dictd dict-gcide

No additional configuration is required. The minimalistic /usr/share/doc/dictd/dict1.conf file specifies the remote server for the dictionary databases. This tool uses the Dictionary Server Protocol (DICT) on port 2628.

Use dict on Linux

In a terminal session as a non-root user, type dict followed by a word to get a list of definitions from one or more dictionaries and the thesaurus. For example, look up the word memory this way:

$ dict memory | less
6 definitions found

From The Collaborative International Dictionary of English v.0.48 [gcide]:

  Memory \Mem"o*ry\, n.; pl. {Memories}. [OE. memorie, OF. memoire,
     memorie, F. m['e]moire, L. memoria, fr. memor mindful; cf. mora
     delay. Cf. {Demur}, {Martyr}, {Memoir}, {Remember}.]
     [1913 Webster]
     1. The faculty of the mind by which it retains the knowledge of
        previous thoughts, impressions, or events.
        [1913 Webster]

              Memory is the purveyor of reason. --Rambler.
        [1913 Webster]

     2. The reach and positiveness with which a person can remember;
        the strength and trustworthiness of one's power to reach and
        represent or to recall the past; as, his memory was never
        wrong.
        [1913 Webster]

From WordNet (r) 3.0 (2006) [wn]:

  memory
      n 1: something that is remembered; "search as he would, the
           memory was lost"
      2: the cognitive processes whereby past experience is
         remembered; "he can do it from memory"; "he enjoyed
         remembering his father" [syn: {memory}, {remembering}]
      3: the power of retaining and recalling past experience; "he had

From Moby Thesaurus II by Grady Ward, 1.0 [moby-thesaurus]:

  78 Moby Thesaurus words for "memory":
     RAM, anamnesis, anniversaries, archetypal pattern, archetype,
     awareness, celebrating, celebration, ceremony, cognizance,
     commemoration, consciousness, disk memory, dressing ship,

From The Free On-line Dictionary of Computing (30 December 2018) [foldoc]:

  memory
     These days, usually used synonymously with {Random Access
     Memory} or {Read-Only Memory}, but in the general sense it can
     be any device that can hold {data} in {machine-readable} format.
     (1996-05-25)

From Bouvier's Law Dictionary, Revised 6th Ed (1856) [bouvier]:

  MEMORY, TIME OF. According to the English common law, which has
     been altered by 2 & 3 Wm. IV., c. 71, the time of memory
     commenced from the reign of

I have cut large sections of this result to save space while leaving enough information to provide an idea of what typical results look like. You can also look up multi-word phrases by enclosing them in quotes, either double or single.

$ dict "air gapped"Dictionaries

The dict command uses several online dictionaries, including legal and technical ones. Dictionaries are also available for many languages. You can list the available dictionary databases as shown here:

$ dict -D | less
Databases available:
   gcide           The Collaborative International Dictionary of English v.0.48
   wn              WordNet (r) 3.0 (2006)
   moby-thesaurus  Moby Thesaurus II by Grady Ward, 1.0
   elements        The Elements (07Nov00)
   vera            V.E.R.A. -- Virtual Entity of Relevant Acronyms (February 2016)
   jargon          The Jargon File (version 4.4.7, 29 Dec 2003)
   foldoc          The Free On-line Dictionary of Computing (30 December 2018)
   easton          Easton's 1897 Bible Dictionary
   hitchcock       Hitchcock's Bible Names Dictionary (late 1800's)
   bouvier         Bouvier's Law Dictionary, Revised 6th Ed (1856)
   devil           The Devil's Dictionary (1881-1906)
   world02         CIA World Factbook 2002
   gaz2k-counties  U.S. Gazetteer Counties (2000)
   gaz2k-places    U.S. Gazetteer Places (2000)
   gaz2k-zips      U.S. Gazetteer Zip Code Tabulation Areas (2000)
   fd-hrv-eng      Croatian-English FreeDict Dictionary ver. 0.1.2
   fd-fin-por      suomi-português FreeDict+WikDict dictionary ver. 2018.09.13
   fd-fin-bul      suomi-български език FreeDict+WikDict dictionary ver. 2018.09.13
   fd-fra-bul      français-български език FreeDict+WikDict dictionary ver. 2018.09.13
   fd-deu-swe      Deutsch-Svenska FreeDict+WikDict dictionary ver. 2018.09.13


You can specify individual dictionaries with the -d option:

$ dict -d gcide

Endmost Dictum

Sometimes using words found in the thesaurus is not the best approach to writing as it can obfuscate your meaning. But I do find that the dict command can be immensely useful in choosing the best word for a specific meaning. It also ensures that the words I use are spelled correctly.

There's a dearth of information about dict. The URL http://www.dict.org/ provides only a web-based interface to the dictionaries, and the man page covers syntax. Still, dict is a useful and fun command to have handy. I admit that after discovering it, I spent many hours just trying different things to see what the results would be. I was the kid who read the encyclopedia and dictionary. Yes, I was that kid. In addition to being a useful tool when writing or reading, dict can also be a fun way to satisfy a bit of curiosity.

The dict command on Linux is useful for writers to access a plethora of dictionaries and synonyms for their word choices.



Reinvent your release strategy with an API gateway

Mon, 02/06/2023 - 16:00
Reinvent your release strategy with an API gateway iambobur Mon, 02/06/2023 - 03:00

One benefit of moving to an API-based architecture is that you can iterate quickly and deploy new changes to your services. An API gateway also establishes the concepts of traffic and routing for the modernized part of the architecture. An API gateway provides stages, allowing you to have multiple deployed APIs behind the same gateway, and is capable of in-place updates with no downtime. Using an API gateway enables you to leverage the service's numerous API management features, such as authentication, rate throttling, observability, multiple API versioning, and stage deployment management (deploying an API in multiple stages, such as dev, test, stage, and prod).

Open source API gateway (Apache APISIX and Traefik) and service mesh (Istio and Linkerd) solutions are capable of doing traffic splitting and implementing functionalities like canary release and blue green deployment. With canary testing, you can make a critical examination of a new release of an API by selecting only a small portion of your user base. 

What is a canary release?

A canary release introduces a new version of the API and flows a small percentage of the traffic to the canary. In API gateways, traffic splitting makes it possible to gradually shift or migrate traffic from one version of a target service to another. For example, a new version, v1.1, of a service can be deployed alongside the original, v1.0. Traffic shifting enables you to canary test or release your new service by at first only routing a small percentage of user traffic, say 1%, to v1.1, then shifting all of your traffic over time to the new service.
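As a concrete sketch, here is roughly what such a split could look like with the Apache APISIX traffic-split plugin on a route. The upstream address, the upstream_id, and the 1/99 weights below are illustrative assumptions, not values taken from this article:

{
  "uri": "/api/*",
  "plugins": {
    "traffic-split": {
      "rules": [
        {
          "weighted_upstreams": [
            {
              "upstream": {
                "name": "canary-v1.1",
                "type": "roundrobin",
                "nodes": { "10.0.0.2:8080": 1 }
              },
              "weight": 1
            },
            { "weight": 99 }
          ]
        }
      ]
    }
  },
  "upstream_id": "1"
}

An entry with a weight but no upstream falls through to the route's default upstream, so 99% of requests continue to reach v1.0 while 1% reach the canary.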



This allows you to monitor the new service, look for technical problems, such as increased latency or error rates, and look for the desired business impact. This includes checking for an increase in key performance indicators like customer conversion ratio or the average shopping checkout value. Traffic splitting enables you to run A/B or multivariate tests by dividing traffic destined for a target service between multiple versions of the service. For example, you can split traffic 50/50 across your v1.0 and v1.1 of the target service and see which performs better over a specific period of time. Learn more about the traffic split feature in Apache APISIX Ingress Controller.

When appropriate, canary releases are an excellent option, as the percentage of traffic exposed to the canary is highly controlled. The trade-off is that the system must have good monitoring in place to be able to quickly identify an issue and roll back if necessary (which can be automated). This guide shows you how to use Apache APISIX and Flagger to quickly implement a canary release solution.


Traffic mirroring

In addition to using traffic splitting to run experiments, you can also use traffic mirroring to copy or duplicate traffic. You can send this to an additional location or a series of locations. Frequently with traffic mirroring, the results of the duplicated requests are not returned to the calling service or end user. Instead, the responses are evaluated out-of-band for correctness. For instance, you might compare the results generated by a refactored service against those from the existing one.


Using traffic mirroring enables you to "dark release" services, where a user is kept in the dark about the new release, but you can internally observe for the required effect.

Implementing traffic mirroring at the edge of systems has become increasingly popular over the years. APISIX offers the proxy-mirror plugin to mirror client requests. It duplicates the real online traffic to the mirroring service and enables specific analysis of the online traffic or request content without interrupting the online service.
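As a minimal sketch, a route using proxy-mirror might look like this, assuming a hypothetical mirror service address (sample_ratio controls what fraction of requests is duplicated; 1 mirrors everything):

{
  "uri": "/api/*",
  "plugins": {
    "proxy-mirror": {
      "host": "http://10.0.0.9:9797",
      "sample_ratio": 1
    }
  },
  "upstream_id": "1"
}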

What is a blue green deployment?

Blue green deployment is usually implemented at a point in the architecture that uses a router, gateway, or load balancer. Behind this sits a complete blue environment and a green environment. The current blue environment represents the current live environment, and the green environment represents the next version of the stack. The green environment is checked prior to switching to live traffic. When it goes live, the traffic is flipped over from blue to green. The blue environment is now off, but if a problem is spotted the rollback is quick. The next change is to go from green to blue, oscillating from the first release onward.
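One simple way to picture the flip in Kubernetes terms is a Service whose selector points at either the blue or the green Deployment. This is a generic sketch with hypothetical names, not a configuration from this article:

apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
    track: blue    # change to "green" to flip all live traffic in one step
  ports:
    - port: 80
      targetPort: 8080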


Blue green works well due to its simplicity, and it is one of the better deployment options for coupled services. It is also easier to manage persisting services, though you still need to be careful in the event of a rollback. It also requires double the number of resources to be able to run cold in parallel to the currently active environment.

Traffic management with Argo Rollouts

The strategies discussed add a lot of value, but the rollout itself is a task that you would not want to have to manage manually. This is where a tool such as Argo Rollouts is valuable for demonstrating some of the concerns discussed.

Using Argo, it is possible to define a Rollout CRD that represents the strategy you can take for rolling out a new canary of your API. A custom resource definition (CRD) allows Argo to extend the Kubernetes API to support rollout behavior. CRDs are a popular pattern with Kubernetes. They allow the user to interact with one API with the extension to support different features.

You can use the Apache APISIX and Apache APISIX Ingress Controller for traffic management with Argo Rollouts. This guide shows you how to integrate ApisixRoute with Argo Rollouts using it as a weighted round-robin load balancer.
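For illustration, a Rollout describing a stepped canary might look roughly like the following. The replica count, image name, and step weights are assumptions for the sketch:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:v1.1   # the new canary version
  strategy:
    canary:
      steps:
        - setWeight: 10            # send 10% of traffic to the canary
        - pause: {duration: 5m}    # observe metrics before continuing
        - setWeight: 50
        - pause: {}                # wait here for manual promotion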

Summary

The ability to separate the deployment and release of service (and corresponding API) is a powerful technique, especially with the rise in the progressive delivery approach. A canary release service can make use of the API gateway traffic split and mirroring features, and provides a competitive advantage. This helps your business with both mitigating risk of a bad release and also understanding your customer's requirements.

This article was originally published on the API7.ai blog and has been republished with permission.

The ability to separate the deployment and release of service (and corresponding API) is a powerful technique, especially with the rise in the progressive delivery approach.



Open source video captioning on Linux

Sat, 02/04/2023 - 16:00
Open source video captioning on Linux sethkenlon Sat, 02/04/2023 - 03:00

In a perfect world, all videos would have transcripts, and live videos would have captioning. It's not just a requirement for people without hearing to be able to participate in pop culture and video chats, it's a luxury for people with hearing who just prefer to read what's been said. Not all software has captioning built-in though, and some that does relies on third-party cloud services to function. Live Captions is an application for the Linux desktop that provides instant, local, and open source captioning for video.

Install Live Captions

You can install Live Captions as a Flatpak.

If your Linux distribution doesn't ship with a software center, install it manually from a terminal. First, add the Flathub repository:

$ flatpak remote-add --if-not-exists flathub \
  https://flathub.org/repo/flathub.flatpakrepo

Next, install the application:

$ flatpak install flathub net.sapples.LiveCaptions

Launch Live Captions

To start Live Captions, launch it from your application menu.

Alternatively, you can start it from a terminal using the flatpak command:

$ flatpak run net.sapples.LiveCaptions

You can also use a command like Fuzzpak:

$ fuzzpak LiveCaptions

When Live Captions first starts, you're presented with a configuration screen.


You can set the font, font size, colors, and more. By default, text that Live Captions isn't 100% confident about is presented in a darker color than your chosen font color. If you're using Live Captions as a convenience, this probably isn't necessary, but if you can't hear the video, then it's good to get an idea of words that may not be correct.

You can return to the preferences screen anytime, so your choices don't have to be final.

Using Live Captions

Once Live Captions is running, any English words coming through your system sound are printed to the Live Captions window.



This isn't a cloud service. There are no API keys required. There's no telemetry or spying and no data collection. In fact, it doesn't even require network permissions. Live Captions is open source, so there are no proprietary services or libraries in use.

To change the sound input, click the Microphone icon in the top left of the Live Captions window. To open the Preferences window, click on the Gear icon in the bottom left of the Live Captions window.

Open access

In my experience, the results of Live Captions are good. They're not perfect, but in small Jitsi video calls, it's excellent. Even with niche videos (rowdy tournaments of Warhammer 40,000, for instance) it does surprisingly well, stumbling over only the most fictional of sci-fi terminology.

Making open source accessible is vital, and in the end it has the potential to benefit everyone. I don't personally require Live Captions, but I enjoy using it when I don't feel like listening to a video. I also use it when I want help to focus on something that I might otherwise be distracted away from. Live Captions isn't just a fun open source project, it's an important one.

Live Captions is an application for the Linux desktop that provides instant, local, and open source captioning for video.



How upstream contributions power scientific research

Fri, 02/03/2023 - 16:00
How upstream contributions power scientific research cdelia Fri, 02/03/2023 - 03:00

Horizon Europe emphasizes open science and open source technology. The program evolved from Horizon 2020, which provided financial support for research projects that promoted industrial competitiveness, advanced scientific excellence, or solved social challenges through the process of "open science."

Open science is an approach to the scientific process based on open cooperative work, tools, and the diffusion of knowledge, as defined in the Horizon Europe Regulation and Model Grant Agreement. This open science approach aligns with open source principles that provide a structure for such cooperation.

The open source principles are:

  • Transparency
  • Collaboration
  • Release early, release often
  • Inclusion
  • Community orientation

One of the foundational principles of open source software development is an "upstream first" philosophy. The opposite direction is "downstream," and upstream and downstream together make up the ecosystem for a given software package or distribution. Upstreams are important because that's where source contributions come from.


Each upstream is unique, but generally, the upstream is where decisions are made and where the community for a project collaborates for the project's objectives. Work done upstream can flow out to many other open source projects. The upstream is also a place where developers can report bugs and security vulnerabilities. If a bug or security flaw is fixed upstream, then every downstream project or product based on the upstream can benefit from that work.

It is important to contribute to the work side-by-side with the rest of the community from which you benefit. By working upstream first, there is the opportunity to vet ideas with the larger community and work together to build new features, releases, content, etc. It's far better if all the contributors work together rather than contributors from different companies, universities, or affiliations working on features behind closed doors and then trying to integrate them later. Open source contributions can outlive the research project's duration, making a more durable impact.

As an example of such contributions, in the ORBIT FP7 EU project, a feature was developed by Red Hat (lower layers, such as Linux Kernel and QEMU) and Umea University (upper layers, such as LibVirt and OpenStack) and contributed to their related upstream communities. This enabled "post-copy live migration of VMs" in OpenStack. Even though that was done several years ago, that feature is still available (and independently maintained) in any OpenStack distribution today (as well as plain LibVirt and QEMU).

Just as with software development, research under Horizon Europe promotes the adoption of sharing research outputs as early and widely as possible to citizen science, developing new indicators for evaluation research, and rewarding researchers. With open source upstream communities, the research contributed can extend beyond the research project timeline by feeding into the upstream life cycle. This allows future consumption by companies, universities, governments, etc., to evolve and further secure the research's project contribution.

This article originally appeared on The Impact of Upstreaming Research Contributions and is republished with permission.

Just as with software development, research under Horizon Europe promotes the adoption of sharing research outputs as early and widely as possible to citizen science, developing new indicators for evaluation research, and rewarding researchers.



How I apply open source principles to filmmaking

Fri, 02/03/2023 - 16:00
How I apply open source principles to filmmaking psubhashish Fri, 02/03/2023 - 03:00

As a nonfiction filmmaker, I have made over nine films, all under open licenses. But that choice always comes at a cost. It's tricky, if not impossible, to release a film under an open license if the film's copyright owner does not fully own the footage used. Films often purchase rights for media produced by others to be able to use in their work legally. As my films are mostly in endangered or low-resource languages with little or no pre-existing media, the option for purchasing existing media is often out of the question.

On the other hand, film productions often record hours of footage and audio but only use a small percentage of those in the film. Footage that might not have an immediate use for a film's production house can be useful for others. In my case, many interviewed marginalized communities have a moral ownership over the footage. But researchers and others who interview them do not always provide communities with direct and open access. For these ethical and practical reasons, it is a good idea to share footage and the film under an open license and inform the communities interviewed or featured.

There are many reasons filmmakers cannot release films under an open license, but this post is for those who somehow can. I often imagined what the open source equivalent would be for films that adhere to the Openness philosophy. Enter "Open Filmmaking," a framework that encourages releasing the source code of a film, i.e., footage under open licenses, and actively uses other practices such as open source software and open multimedia resources.

In my two recent documentary film projects, "The Volunteer Archivists" and "Nani Ma," I utilized media with open licenses and various open source software (FLOSS). The films explore the areas of citizen science, archiving public domain text, documentation of oral history, and the use of open licenses and open source, as well as volunteerism.

The Volunteer Archivists

"The Volunteer Archivists" follows the sixteen-year journey of digital archiving of texts by a volunteer-led group called Srujanika from Bhubaneswar in the Indian state of Odisha. Founded in 1983 by a scientist couple, they have managed with a small workforce to archive over 10,000 volumes of books, magazines, and other periodicals published in the Odia language since the early 1800s. They now host the archived texts online at OdiaBibhaba.in. Their original work includes growing a citizen science community over two decades to promote popular science education outside textbooks, publishing a monthly magazine called "Bigyana Tarang," and several illustrated publications. As scanners became more affordable, they began the process of archiving in 2006, starting with the "Purnnachandra Odia Bhashakosha," a seven-volume lexicon from 1930-1940 that powers the Odia Wiktionary. Srujanika also contributed to the Wikimedia movement by localizing computing terms into Odia and creating a manual for Odia computer translation style and convention. Linux distributions and FLOSS, like LibreOffice, were localized in Odia because of their effort.

Nani Ma

"Nani Ma" is based on oral history that was never recorded in audiovisual media, narrated by the late Musamoni Panigrahi in an early 1900s register of the Baleswari/northern dialect of Odia. Musamoni was my grandmother, and it was quite late by the time I realized how unique her stories, songs, and narration style were. The register, storytelling, and overall oral history are important pieces of history as they were strongly influenced by the Orissa famine of 1866, a direct impact of the British colonization of India.

The entire footage and supporting multimedia files used in "Nani Ma" are now available on the Internet Archive under a CC BY-SA 4.0 License, and the film will be available to the general public after screening in film festivals.

In addition to using openly-licensed media and releasing the production media (video footage, audio recordings, still images, and promotional graphics) from these two films under open licenses, the projects also made use of several FLOSS applications. Some of the software includes Audacity for audio editing and codec conversion, HandBrake for video conversions, extensive use of Inkscape and GIMP, respectively, for all vector and raster image editing, Scribus for typesetting documents, and typefaces under the Open Font License (available on Google Fonts and other places). I also used openly-licensed images from Wikimedia Commons and audio from freesound.org.

Open Filmmaking can be fun and challenging at the same time.

Wrap up

Recommendations based on technical, ethical, and licensing constraints:

  • Open source software often comes with beta releases that are many steps ahead of the stable releases. While beta versions can add a modern flair to your work, they aren't tested like the stable versions are. Use the stable version for mission-critical projects.
  • Save your work-in-progress frequently.
  • If you're new to open source creative tools, then budget time to learn new software.
  • Budget time to create metadata. Audiovisual media without metadata is hard to find.
  • Upload media publicly if you have consent from the people featured. B-rolls could sometimes include private data in the audio (as well as in the video). Redact such information before uploading. Uploading raw and unedited video featuring people always needs scrutiny.
  • Use a Creative Commons license for releasing audiovisual content you own. But the license spectrum can be confusing. Use a tool like License Chooser to evaluate what license makes the most sense.
  • Use open codecs and other open multimedia resources while preparing files for uploading.

Lastly, finding a public hosting platform that shares the Openness movement's values can be intimidating. Internet Archive is my personal choice as it is helpful to create Collections under which files of different kinds can be uploaded (see "The Volunteer Archivists" and "Nani Ma" collections). Happy open-filmmaking!

Open Filmmaking is a framework that encourages releasing the source code of a film, i.e., footage under open licenses, and actively uses other practices such as open source software and open multimedia resources.



Learn Basic by coding a game

Thu, 02/02/2023 - 16:00
Learn Basic by coding a game Moshe Zadka Thu, 02/02/2023 - 03:00

Writing the same application in multiple languages is a great way to learn new ways to program. Most programming languages have certain things in common, such as:

  • Variables
  • Expressions
  • Statements

These concepts are the basis of most programming languages. Once you understand them, you can start figuring the rest out.

Programming languages usually share some similarities. Once you know one programming language, you can learn the basics of another by recognizing its differences.

Practicing with a standard program is a good way of learning a new language. It allows you to focus on the language, not the program's logic. I'm doing that in this article series using a "guess the number" program, in which the computer picks a number between one and 100 and asks you to guess it. The program loops until you guess the number correctly.

This program exercises several concepts in programming languages:

  • Variables
  • Input
  • Output
  • Conditional evaluation
  • Loops

It's a great practical experiment to learn a new programming language. This article focuses on Basic.

Guess the number in (Bywater) Basic

There is no real standard for the Basic programming language. Wikipedia says, "BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use." The BWBasic implementation is available under the GPL.

You can explore Basic by writing a version of the "guess the number" game.

Install Basic on Linux

In Debian or Ubuntu, you can install Basic with the following:

$ sudo apt install -y bwbasic

Download the latest release tarball for Fedora, CentOS, Mageia, and any other Linux distribution. Extract it, make it executable, and then run it from a terminal:

$ tar --extract --file bwbasic*z
$ chmod +x bywater
$ ./bywater

On Windows, download the .exe release.

Basic code

Here is my implementation:

10 value$ = cint(rnd * 100) + 1
20 input "enter guess"; guess$
30 guess$ = val(guess$)
40 if guess$ < value$ then print "Too low"
50 if guess$ > value$ then print "Too high"
60 if guess$ = value$ then 80
70 goto 20
80 print "That's right"


Basic programs can be numbered or unnumbered. Usually, it is better to write programs unnumbered, but writing them with numbered lines makes it easier to refer to individual lines.

By convention, coders write lines as multiples of 10. This approach allows interpolating new lines between existing ones for debugging. Here's an explanation of my method above:

  • Line 10: Computes a random value between 1 and 100 using the built-in rnd function, which generates a number between 0 and 1, not including 1.
  • Line 20: Asks for a guess and puts the value in the guess$ scalar variable. Line 30 converts the value to a numeric one.
  • Lines 40 and 50: Give the guesser feedback, depending on the comparison.
  • Line 70: Goes to the beginning of the loop.
  • Line 60: Breaks the loop by transferring control to line 80. Line 80 is the last line, so the program exits after that.
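
The multiples-of-10 convention pays off when debugging. As a quick, hypothetical illustration (this line is not part of the final game), you could interpolate a debugging line between lines 10 and 20 to reveal the secret value while testing, then delete it afterward:

15 print "debug: the secret value is "; value$
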
Sample output

The following is an example of the program after putting it in program.bas:

$ bwbasic program.bas
Bywater BASIC Interpreter/Shell, version 2.20 patch level 2
Copyright (c) 1993, Ted A. Campbell
Copyright (c) 1995-1997, Jon B. Volkoff
enter guess? 50
Too low
enter guess? 75
Too low
enter guess? 88
Too high
enter guess? 80
Too low
enter guess? 84
Too low
enter guess? 86
Too high
enter guess? 85
That's right

Get started

This "guess the number" game is a great introductory program for learning a new programming language because it exercises several common programming concepts in a pretty straightforward way. By implementing this simple game in different programming languages, you can demonstrate some core concepts of the languages and compare their details.

Do you have a favorite programming language? How would you write the "guess the number" game in it? Follow this article series to see examples of other programming languages that might interest you!

This tutorial lets you explore Basic by writing a version of the "guess the number" game.


A guide to fuzzy queries with Apache ShardingSphere

Wed, 02/01/2023 - 16:00
A guide to fuzzy queries with Apache ShardingSphere xionggaoxiang Wed, 02/01/2023 - 03:00

Apache ShardingSphere is an open source distributed database and the ecosystem users and developers need to give their databases a customized, cloud-native experience. Its latest release contains many new features, including data encryption integrated with existing SQL workflows. Most importantly, it allows fuzzy queries of the encrypted data.

The problem

ShardingSphere parses a user's SQL input and rewrites it according to the user's encryption rules, so the original data is encrypted and stored as ciphertext in the underlying database.

When a user queries the data, ShardingSphere fetches the ciphertext from the database, decrypts it, and returns the decrypted original data to the user. However, because the encryption algorithm encrypts the whole string, users cannot run fuzzy queries.

Nevertheless, many businesses need fuzzy queries after the data is encrypted. In version 5.3.0, Apache ShardingSphere provides users with a default fuzzy query algorithm that supports encrypted fields. The algorithm supports hot plugging, so users can customize it, and fuzzy query can be enabled through configuration.

How to achieve fuzzy query in encrypted scenarios

Load data to the in-memory database (IMDB)

First, load all the data into the IMDB to decrypt it. Then, it'll be like querying the original data. This method can achieve fuzzy queries. If the amount of data is small, this method will prove simple and cost-effective. However, if the quantity of data is large, it'll be a disaster.

Implement encryption and decryption functions consistent with database programs

The second method is to modify fuzzy query conditions and use the database decryption function to decrypt data first and then implement fuzzy query. This method's advantage is the low cost of implementation, development, and use.

Users only need to modify the previous fuzzy query conditions slightly. However, the ciphertext and encryption functions are stored together in the database, which cannot cope with the problem of account data leaks.

Native SQL:

select * from user where name like "%xxx%"

After implementing the decryption function:

select * from user where decode(name) like "%xxx%"

Store after data masking

Implement data masking on the ciphertext and then store it in a fuzzy query column. This method can lack precision.

For example, mobile number 13012345678 becomes 130****5678 after the masking algorithm is performed.
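
To make the idea concrete, here is a minimal Java sketch of that masking step. This is my own illustration, not ShardingSphere code, and the regular expression assumes an 11-digit mobile number:

public class PhoneMaskDemo {
    public static void main(String[] args) {
        // Keep the first 3 and last 4 digits; replace the 4 digits in between.
        String phone = "13012345678";
        String masked = phone.replaceAll("(\\d{3})\\d{4}(\\d{4})", "$1****$2");
        System.out.println(masked); // prints 130****5678
    }
}

A fuzzy match against the masked column then only works for the digits that survive the masking, which is why this method trades precision for simplicity.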

Perform encrypted storage after tokenization and combination

This method performs tokenization and combination on the data and then encrypts the result, grouping characters of a fixed length and splitting one field into multiple ones. For example, we take four English characters or two Chinese characters as a query condition: ningyu1 is encrypted four characters at a time, so the first group is ning, the second group ingy, the third group ngyu, the fourth group gyu1, and so on. All the groups are encrypted and stored in the fuzzy query column. If you want to retrieve all data that contains four characters, such as ingy, encrypt the characters and use a key like "%partial%" to query. A sketch of this grouping appears below.
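
Here is a minimal Java sketch of that grouping step (the encrypt method is a placeholder for a real cipher, and all names are illustrative rather than ShardingSphere's API):

import java.util.ArrayList;
import java.util.List;

public class NgramGroupingDemo {

    // Placeholder for a real cipher; here we only tag the group.
    static String encrypt(String group) {
        return "enc(" + group + ")";
    }

    public static void main(String[] args) {
        String plain = "ningyu1";
        int groupLength = 4; // four English characters per group
        List<String> encryptedGroups = new ArrayList<>();
        // Slide a fixed-length window over the plaintext: ning, ingy, ngyu, gyu1.
        for (int i = 0; i + groupLength <= plain.length(); i++) {
            encryptedGroups.add(encrypt(plain.substring(i, i + groupLength)));
        }
        // Every encrypted group would be stored in the fuzzy query column.
        System.out.println(encryptedGroups);
    }
}
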

Shortcomings:

  1. Increased storage costs: Free grouping will increase the amount of data, and the data length will increase after being encrypted.
  2. Limited length in fuzzy query: Due to security issues, the length of free grouping cannot be too short or the rainbow table will easily crack it. Like the example I mentioned above, the length of fuzzy query characters must be greater than or equal to four letters/digits or two Chinese characters.

Single-character digest algorithm (default fuzzy query algorithm provided in ShardingSphere version 5.3.0)

Although the above methods are all viable, it's only natural to wonder if there's a better alternative. In our community, we find that single-character encryption and storage can balance performance and query but fails to meet security requirements.

So what's the ideal solution? Inspired by masking algorithms and cryptographic hash functions, we find that data loss and one-way functions can be used.

The cryptographic hash function should have the following four features:

  1. It should be easy to calculate the hash value for any given message.
  2. It should be difficult to infer the original message from a known hash value.
  3. It should not be feasible to modify the message without changing the hash value.
  4. There should only be a very low chance that two different messages produce the same hash value.

Security: Because of the one-way function, it's impossible to infer the original message. To improve the accuracy of the fuzzy query, we want to encrypt each character individually, but a rainbow table could then crack it.

So we take a one-way function (to ensure every character encrypts to the same value wherever it appears) and increase the frequency of collisions (to ensure every string maps backward 1:N rather than 1:1), which greatly enhances security.

Fuzzy query algorithm

Apache ShardingSphere implements a universal fuzzy query algorithm with the single-character digest algorithm below, implemented in org.apache.shardingsphere.encrypt.algorithm.like.CharDigestLikeEncryptAlgorithm:

public final class CharDigestLikeEncryptAlgorithm implements LikeEncryptAlgorithm {
    
    private static final String DELTA = "delta";
    
    private static final String MASK = "mask";
    
    private static final String START = "start";
    
    private static final String DICT = "dict";
    
    private static final int DEFAULT_DELTA = 1;
    
    private static final int DEFAULT_MASK = 0b1111_0111_1101;
    
    private static final int DEFAULT_START = 0x4e00;
    
    private static final int MAX_NUMERIC_LETTER_CHAR = 255;
    
    @Getter
    private Properties props;
    
    private int delta;
    
    private int mask;
    
    private int start;
    
    private Map<Character, Integer> charIndexes;
    
    @Override
    public void init(final Properties props) {
        this.props = props;
        delta = createDelta(props);
        mask = createMask(props);
        start = createStart(props);
        charIndexes = createCharIndexes(props);
    }
    
    private int createDelta(final Properties props) {
        if (props.containsKey(DELTA)) {
            String delta = props.getProperty(DELTA);
            try {
                return Integer.parseInt(delta);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "delta can only be a decimal number");
            }
        }
        return DEFAULT_DELTA;
    }
    
    private int createMask(final Properties props) {
        if (props.containsKey(MASK)) {
            String mask = props.getProperty(MASK);
            try {
                return Integer.parseInt(mask);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "mask can only be a decimal number");
            }
        }
        return DEFAULT_MASK;
    }
    
    private int createStart(final Properties props) {
        if (props.containsKey(START)) {
            String start = props.getProperty(START);
            try {
                return Integer.parseInt(start);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "start can only be a decimal number");
            }
        }
        return DEFAULT_START;
    }
    
    private Map<Character, Integer> createCharIndexes(final Properties props) {
        String dictContent = props.containsKey(DICT) && !Strings.isNullOrEmpty(props.getProperty(DICT)) ? props.getProperty(DICT) : initDefaultDict();
        Map<Character, Integer> result = new HashMap<>(dictContent.length(), 1);
        for (int index = 0; index < dictContent.length(); index++) {
            result.put(dictContent.charAt(index), index);
        }
        return result;
    }
    
    @SneakyThrows
    private String initDefaultDict() {
        InputStream inputStream = CharDigestLikeEncryptAlgorithm.class.getClassLoader().getResourceAsStream("algorithm/like/common_chinese_character.dict");
        LineProcessor<String> lineProcessor = new LineProcessor<String>() {
            
            private final StringBuilder builder = new StringBuilder();
            
            @Override
            public boolean processLine(final String line) {
                if (line.startsWith("#") || 0 == line.length()) {
                    return true;
                } else {
                    builder.append(line);
                    return false;
                }
            }
            
            @Override
            public String getResult() {
                return builder.toString();
            }
        };
        return CharStreams.readLines(new InputStreamReader(inputStream, Charsets.UTF_8), lineProcessor);
    }
    
    @Override
    public String encrypt(final Object plainValue, final EncryptContext encryptContext) {
        return null == plainValue ? null : digest(String.valueOf(plainValue));
    }
    
    private String digest(final String plainValue) {
        StringBuilder result = new StringBuilder(plainValue.length());
        for (char each : plainValue.toCharArray()) {
            char maskedChar = getMaskedChar(each);
            if ('%' == maskedChar) {
                result.append(each);
            } else {
                result.append(maskedChar);
            }
        }
        return result.toString();
    }
    
    private char getMaskedChar(final char originalChar) {
        if ('%' == originalChar) {
            return originalChar;
        }
        if (originalChar <= MAX_NUMERIC_LETTER_CHAR) {
            return (char) ((originalChar + delta) & mask);
        }
        if (charIndexes.containsKey(originalChar)) {
            return (char) (((charIndexes.get(originalChar) + delta) & mask) + start);
        }
        return (char) (((originalChar + delta) & mask) + start);
    }
    
    @Override
    public String getType() {
        return "CHAR_DIGEST_LIKE";
    }
}
  • Define the binary mask code to lose precision 0b1111_0111_1101 (mask).
  • Save common Chinese characters with disrupted order like a map dictionary.
  • Obtain a single string of Unicode for digits, English, and Latin.
  • Obtain an index for a Chinese character belonging to a dictionary.
  • Other characters fetch the Unicode of a single string.
  • Add 1 (delta) to the digits obtained by different types above to prevent any original text from appearing in the database.
  • Then convert the offset Unicode into binary, perform the AND operation with the mask, and drop two bits of precision (see the walk-through after this list).
  • Directly output digits, English, and Latin after the loss of precision.
  • The remaining characters are converted to decimal and output with the common character start code after the loss of precision.
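
To make the digits-and-letters path concrete, here is a small, hypothetical Java walk-through that applies the default delta and mask from the class above to a few ASCII characters:

public class CharDigestDemo {

    private static final int DELTA = 1;               // DEFAULT_DELTA
    private static final int MASK = 0b1111_0111_1101; // DEFAULT_MASK

    public static void main(String[] args) {
        // Offset each character by delta, then AND it with the mask to lose
        // two bits of precision, exactly as the list above describes.
        for (char c : "ab12".toCharArray()) {
            char digest = (char) ((c + DELTA) & MASK);
            System.out.printf("%c -> %c%n", c, digest);
        }
    }
}
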
The fuzzy algorithm development progress

The first edition

The first edition simply performs the AND operation on the Unicode of common characters and the mask code.

Mask: 0b11111111111001111101
The original character: 0b1000101110101111 讯
After encryption: 0b1000101000101101 設

Assuming we know the key and encryption algorithm, the original string after a backward pass is:

1. 0b1000101100101101 謭
2. 0b1000101100101111 謯
3. 0b1000101110101101 训
4. 0b1000101110101111 讯
5. 0b1000101010101101 読
6. 0b1000101010101111 誯
7. 0b1000101000101111 訯
8. 0b1000101000101101 設

Based on the missing bits, we find that each string can be derived backward to 2^n Chinese characters. In decimal, the Unicode values of common Chinese characters have large intervals between them, so most of the characters inferred backward are uncommon ones, which makes the original character more likely to stand out.

Image by: (Xiong Gaoxiang, CC BY-SA 4.0)
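
You can check the 1:N backward mapping with a short brute-force sketch (illustrative Java, assuming an attacker knows the mask and delta, and restricted to ASCII for brevity):

public class BackwardCandidatesDemo {

    public static void main(String[] args) {
        int mask = 0b1111_0111_1101; // two zero bits => up to 2^2 originals per digest
        int delta = 1;
        char observed = '`';         // the digest of 'a' from the walk-through above
        // Enumerate every ASCII character whose digest equals the observed value.
        for (char c = 0; c < 128; c++) {
            if ((char) ((c + delta) & mask) == observed) {
                System.out.printf("candidate original: %c%n", c);
            }
        }
    }
}
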

The second edition

The interval of common Chinese characters in Unicode is irregular. We planned to leave the last few bits of Chinese characters in Unicode and convert them into decimal as an index to fetch some common Chinese characters. This way, when the algorithm is known, uncommon characters won't appear after a backward pass, and distractors are no longer easy to eliminate.

How many of the last bits of a Chinese character's Unicode we keep determines the trade-off between fuzzy query accuracy and anti-decryption complexity: the higher the accuracy, the lower the decryption difficulty.

Let's take a look at the collision degree of common Chinese characters under our algorithm:

1. When mask=0b0011_1111_1111:

Image by: (Xiong Gaoxiang, CC BY-SA 4.0)

2. When mask=0b0001_1111_1111:

Image by: (Xiong Gaoxiang, CC BY-SA 4.0)

For the low bits of Chinese characters, we tried keeping ten and nine bits. The ten-bit query is more accurate because its collisions are much weaker. Nevertheless, if the algorithm and the key are known, the original text of a 1:1 character can be derived backward.

The nine-bit query is less accurate because nine-bit collisions are relatively stronger, but there are fewer 1:1 characters. Whether we keep ten or nine bits, though, the collision distribution is unbalanced because of the irregular Unicode of Chinese characters, so the overall collision probability cannot be controlled.

The third edition

In response to the unevenly distributed problem found in the second edition, we take common characters with disrupted order as the dictionary table.

1. The encrypted text first looks up its index in the out-of-order dictionary table, and we use that index in place of the irregular Unicode, falling back to Unicode for uncommon characters. (Note: this distributes the values to be calculated as evenly as possible.)

2. The next step is to perform the AND operation with a mask and lose two-bit precision to increase the frequency of collisions.

Let's take a look at the collision degree of common Chinese characters under our algorithm:

1. When mask=0b1111_1011_1101:

Image by: (Xiong Gaoxiang, CC BY-SA 4.0)

2. When mask=0b0111_1011_1101:

Image by: (Xiong Gaoxiang, CC BY-SA 4.0)

When the mask leaves 11 bits, you can see that the collision distribution is concentrated at 1:4. When the mask leaves ten bits, the number becomes 1:8. At this time, we only need to adjust the number of precision losses to control whether the collision is 1:2, 1:4 or 1:8.

If the mask is selected as all 1s, and the algorithm and key are known, there will be 1:1 Chinese characters, because we calculate the collision degree of common characters at this point. If we add the missing four bits before the 16-bit binary of Chinese characters, the situation becomes 2^5 = 32 cases.

Since we encrypt the whole text, even if an individual character is inferred backward, there is little impact on overall security, and it will not cause mass data leaks. At the same time, the premise of a backward pass is knowing the algorithm, key, delta, and dictionary, so it's impossible to achieve from the data in the database alone.

How to use fuzzy query

Fuzzy query requires configuring encryptors (encryption algorithm configuration), likeQueryColumn (fuzzy query column name), and likeQueryEncryptorName (encryption algorithm name of the fuzzy query column) in the encryption configuration.

Please refer to the following configuration. Add your own sharding algorithm and data source.

dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://127.0.0.1:3306/test?allowPublicKeyRetrieval=true
    username: root
    password: root

rules:
- !ENCRYPT
  encryptors:
    like_encryptor:
      type: CHAR_DIGEST_LIKE
    aes_encryptor:
      type: AES
      props:
        aes-key-value: 123456abc
  tables:
    user:
      columns:
        name:
          cipherColumn: name
          encryptorName: aes_encryptor
          assistedQueryColumn: name_ext
          assistedQueryEncryptorName: aes_encryptor
          likeQueryColumn: name_like
          likeQueryEncryptorName: like_encryptor
        phone:
          cipherColumn: phone
          encryptorName: aes_encryptor
          likeQueryColumn: phone_like
          likeQueryEncryptorName: like_encryptor
  queryWithCipherColumn: true

props:
  sql-show: true

Insert

Logic SQL: insert into user ( id, name, phone, sex) values ( 1, '熊高祥', '13012345678', '男')
Actual SQL: ds_0 ::: insert into user ( id, name, name_ext, name_like, phone, phone_like, sex) values (1, 'gyVPLyhIzDIZaWDwTl3n4g==', 'gyVPLyhIzDIZaWDwTl3n4g==', '佹堝偀', 'qEmE7xRzW0d7EotlOAt6ww==', '04101454589', '男')

Update

Logic SQL: update user set name = '熊高祥123', sex = '男1' where sex ='男' and phone like '130%'
Actual SQL: ds_0 ::: update user set name = 'K22HjufsPPy4rrf4PD046A==', name_ext = 'K22HjufsPPy4rrf4PD046A==', name_like = '佹堝偀014', sex = '男1' where sex ='男' and phone_like like '041%'

Select

Logic SQL: select * from user where (id = 1 or phone = '13012345678') and name like '熊%'
Actual SQL: ds_0 ::: select `user`.`id`, `user`.`name` AS `name`, `user`.`sex`, `user`.`phone` AS `phone`, `user`.`create_time` from user where (id = 1 or phone = 'qEmE7xRzW0d7EotlOAt6ww==') and name_like like '佹%'

Select: federated table sub-query

Logic SQL: select * from user LEFT JOIN user_ext on user.id=user_ext.id where user.id in (select id from user where sex = '男' and name like '熊%')
Actual SQL: ds_0 ::: select `user`.`id`, `user`.`name` AS `name`, `user`.`sex`, `user`.`phone` AS `phone`, `user`.`create_time`, `user_ext`.`id`, `user_ext`.`address` from user LEFT JOIN user_ext on user.id=user_ext.id where user.id in (select id from user where sex = '男' and name_like like '佹%')

Delete

Logic SQL: delete from user where sex = '男' and name like '熊%'
Actual SQL: ds_0 ::: delete from user where sex = '男' and name_like like '佹%'

The above example demonstrates how fuzzy query columns rewrite SQL in different SQL syntaxes to support fuzzy queries.

Wrap up

This article introduced you to the working principles of fuzzy query and used specific examples to demonstrate how to use it. I hope that through this article, you will have a basic understanding of fuzzy queries.

This article was originally published on Medium and has been republished with the author's permission.

Learn the working principles of fuzzy queries and follow along with specific examples of how to use them.


What's your community thinking pattern?

Wed, 02/01/2023 - 16:00
What's your community thinking pattern? Ron McFarland Wed, 02/01/2023 - 03:00

This article is the second and final part of a discussion of the research by Dave Logan, Bob King, and Halee Fischer-Wright. If you haven't read the first part yet, you can do so here. These researchers defined five cultural thinking patterns in communities of 20 to 150 people. In part one, I explained the first three of the five thinking patterns and suggested the responsibilities of an introducer-in-chief. This environmental thinking refers to how the group behaves and how members talk to each other. To the researchers, each pattern has an identifying perspective:

  • Community thinking pattern #1 (the most negative): "Life is miserable"
  • Community thinking pattern #2: "My life is miserable"
  • Community thinking pattern #3: "I'm great"
  • Community thinking pattern #4: "We're great"
  • Community thinking pattern #5 (the most optimistic): "Life's great"

In this article, I continue with their impressions of community thinking pattern #4 and conclude with thinking pattern #5 (the most optimistic).

Community thinking pattern #4: Environment

In this community thinking and behavior pattern, all five open organization principles (transparency, inclusivity, adaptability, collaboration, and community) become extremely important to success, are actively applied, and are visible. In community thinking pattern #4, a community member's focus shifts from personal gains to direct group or community gains. As the leader (introducer-in-chief), your role as a direct supervisor matters less than delegation to community members. At this stage, you alone are not as powerful and productive as the collective community.

According to the researchers, the roles of clients, suppliers, friends, mentors, and coaches are blurred at this stage. The people in your community form and focus on value-based relationships and identities. They cluster in larger, more transparent networking groups and develop community pride, as at this level, they focus on common community core values and interdependent strategies.

These communities are best formed by assembling like-minded, cooperative individuals who want to advance a specific communal goal. Community members are mostly already at a high individual achieving level, like those in community thinking pattern #3. The community ignores organizational boundaries when looking for community candidates.

In the researchers' words, "We're great," and indirectly, "Others are not" is the thought process. These "others" could even be different groups within the same organization.

Based on raw talent and high collaboration, this community can be powerful and hard to break. If the wrong (but talented) people are let into this culture, it can be hard to fix. Cooperation and transparency can weaken when the wrong choices are made. During the selection process, there are almost no organizational boundaries. The community might include "on loan" members from other departments, contract or part-time workers, contributors, volunteers or free agents, or anyone dedicated to the community purpose.

Regarding collaboration, a community at this stage feels comfortable interacting with other community members only. If invited for a cup of coffee, these community members consider asking at least two other people to come along to get collaboration casually started.

When the team hits difficulties, they seek out where solutions might be found. To do that, they seek diverse parties and ideas. They continually review and discuss:

  1. What is going well.
  2. What is not working well.
  3. What the community can do to improve things.
Feelings within early community thinking pattern #4

Only community members who are individually successful in their specialty can move to community thinking pattern #4. If they have not excelled in their field, they come across as weak when they attempt to jump to thinking pattern #4.

Moving to community thinking pattern #4 requires group interaction and cooperation, not just individual talent. Therefore, open organization principles are needed, namely:

  • Building communities to achieve more.
  • Inclusivity of others (not just doing things alone or one-on-one giving directives and assignments).
  • Adaptability to a new style of getting things done, collaboration, and transparency.
Image by: Modified by Ron McFarland from the researcher's material (Ronald McFarland, CC BY-SA 4.0)

To advance to a community thinking pattern #4 culture, you must begin with the culture's core values and noble cause (center). These are presumably compelling to everyone in the community, or else they wouldn't join in the first place. Next, move to the outcomes (completing a series of doable, measurable tasks at a predetermined time, leading to a specific goal). Finally, acquire the assets required to accomplish the tasks.

These questions come up:

  1. What assets do we have right now, and what do we need to achieve the outcome? These could be equipment, technology, land, relationships and network, education, skills, goodwill, brand, public awareness, reputation, culture, or drive/passion.
  2. Are there enough assets to achieve what we want?
  3. Will the community's behavior accomplish the desired outcome?

The leader must consider the community's current culture and thinking to answer the above questions. Considerations include:

  • Who is in the community?
  • What do they care about?
  • What are they working on?
  • What does everyone want?
  • To support members, who can I introduce them to?

A big difference between community thinking patterns #3 and #4 is that you, as the leader, must move away from the desire for personal success and redirect your attention to the community's success. This requires you to give up some control and rely on the community members to work with each other directly, make their own decisions, and accept accountability for their actions. It starts with introducing two members to each other to address a common objective jointly.

Image by: (Ronald McFarland, CC BY-SA 4.0)

They start working in gatherings of more than two people (beginning with three, including the leader). Sometimes you introduce two people to each other who might be suited to work on an issue. You slowly exit the discussion and allow them to take over. Get out of the way and let your team members work directly with one another.

Feelings within middle community thinking pattern #4

Image by: (Ronald McFarland, CC BY-SA 4.0)

After getting two people working on tasks together, you can start introducing more members with specific functions in mind by continually asking those same questions, starting with "Who is in the community?"

These introductions strengthen the community, increase collaboration, and widen inclusivity.

Feelings within late community thinking pattern #4

Image by: (Ronald McFarland, CC BY-SA 4.0)

After members start working with each other on specific tasks that lead to desired outcomes, a high level of inclusivity and collaboration results. This increases community productivity to a globally superior level, leaving all competition behind in their market.

After identifying the community's thinking, behavior, and language, your action plan is to explore challenges that your team can't work on without involving outside help. Your goal is to find outside collaboration that encourages community thinking pattern #5 culture. This approach is needed when the community has been so competitive that they need a new, more global challenge.

Where once community thinking pattern #4 competed with other communities, now your community is willing to cooperate and collaborate for the overall greater good. This cooperation leads to a more enjoyable working environment. You must encourage and praise your community members at every chance, and redirect any personal praise aimed at you back to them.

Community thinking pattern #5: Environment

People with this thinking pattern feel "life is great," according to the researchers. The members of this community have the same pride as thinking pattern #4, but without the competition. Instead, they hope to collaborate with other communities to achieve a higher calling, like addressing climate change, curing cancer, avoiding war, overcoming disease, or eliminating hunger. They seek other communities that have been extremely successful and have similar goals, objectives, and desires.

While in community thinking pattern #5, an organization is so successful that it has no need or desire to compete with other communities. Just competing bores them. A leader in community thinking pattern #5 is rewarded by community trust building, hard work, innovation, and collaboration. You are less interested in direct, personal rewards and praise, and more interested in boosting community, inclusivity, and collaboration.

Feelings within community thinking pattern #5

In this community thinking pattern, people are very successful. They're so successful that there is no more competition or desire to compete with others. Instead, they turn more toward vital, global, philanthropic goals. Most financial desires have weakened, because they have accumulated far more assets than they ever would hope to need. They have lost the desire to "win" against others and want to channel energy toward a higher goal.

Image by: (Ronald McFarland, CC BY-SA 4.0)

What community are you in?

In your working environment, think about its greater purpose, and consider listening quality, problem-solving quality, ongoing job support, and participation level. Rank those factors from one to four and consider where development is needed. On average, a 3.5 would indicate a competitive thinking pattern #3 community and 4.5 an internally collaborative thinking pattern #4 community. To confirm the group thinking pattern level, also ask yourself what general feeling sticks out among most members:

  • Life is miserable
  • My life is miserable
  • I'm great
  • We're great
  • Life's great

Armed with that information, consider what action is required.

Community thinking patterns in action

This is a fictional story of Bob, a man who experienced all community thinking patterns from #1 to #5 (which few of us do).

Community thinking pattern #1 to #2

Imagine that in Bob's early life, he was abandoned by his family. As an orphan, he grew up in an unsafe environment, and had no one to encourage him to achieve anything beyond basic survival.

You meet Bob by chance, and after getting to know him, you tell Bob that you might be able to help him out. You are the introducer-in-chief.

First, you introduce Bob to Carol, who works as a janitor in a software development company. Carol says she could get Bob a job at this software company, as long as Bob can meet the basic requirements of the position. These are relatively basic requirements: Show up on time, do the work, and so on.

While Bob is doing janitorial work in the software company, he asks some of the code writers and programmers about the work they do. David, one of the programmers, is excited by Bob's interest in software development. He asks Bob whether he'd be interested in learning to program. Bob says, "I've always liked to do troubleshooting work and puzzles, so why not?" So David introduces Bob to a wide range of online software development tutorials. David is a secondary introducer-in-chief for Bob.

Bob studies on a spare computer at the company during his spare time. As he mentors him along, David starts to see real talent in Bob, so he introduces Bob to his college professor, Evelyn. Evelyn also sees Bob's talent and drive, so she counsels him on how to develop his skills further.

Community thinking pattern #2 to #3

Bob proves himself to be a skilled programmer. Over time, his programs become more and more popular. He starts coding for money as a contractor, and as time passes he needs a staff to handle orders coming in. As sales come in, ideas for new programs materialize as well. He has so many ideas that he's overworked. He also hires staff to do programming, but he controls their assignments and tasks very closely. Even though all he's doing is supervising them, he's mentally exhausted at the end of the day.

Community thinking pattern #3 to #4

Bob goes back to Evelyn and asks what to do. Evelyn introduces him to Francis, an open organization consultant for small teams. Francis instructs Bob to get developers to talk to each other more directly, and let them come up with their own solutions and projects. Francis tells Bob that he has to let go of day-to-day supervising operations and start thinking about the overall operation of the team.

Over time, just as explained in the book Team of Teams, a well-oiled, highly efficient operation is created.

Community thinking pattern #4 to #5

After decades of massive successes, Bob has achieved more than his wildest dreams. His desire to be the best and beat all competitors weakens. Bob wants to give back in a way that is most globally helpful. He starts approaching people with the same desires and meets Jeff, a global open organization project manager. Jeff specializes in global, multinational, long-term challenges like climate change, global green energy generation, energy waste reduction, food waste reduction, cybersecurity, international relations, world hunger, global migration, and many others. Through Jeff's introductions, Bob starts a wide range of multinational projects. Again, just as in the book Team of Teams, Bob gets global teams collaborating to address really difficult, long-term challenges. Finally, Bob is at peace knowing how much he has achieved and will continue to achieve well into the future.


In the second part of this series, I explore community thinking patterns #4 and #5. Then, I provide a fictional scenario to illustrate thought processes and community influences.

Image by: Melissa Hogan, CC BY-SA 4.0, via Wikimedia Commons


Merge design and code with Penpot

Tue, 01/31/2023 - 22:00
Merge design and code with Penpot sethkenlon Tue, 01/31/2023 - 09:00

For most of the history of computer programming, there's been a gap between the programmers creating an application's code and the designers creating an application's user experience (UX). The two disciplines receive vastly different training, and they use different sets of tools. Programmers use a text editor or an IDE to write code, while designers often draw concepts of widget layout and potential interactions. While some IDEs, like Eclipse and Netbeans, have interface design components, they're usually focused on widget position rather than widget design.

The open source design app Penpot is a collaborative design and prototyping platform. It has a suite of new features that make it easy for designers and developers to work together with familiar workflows. Penpot's design interface lets developers write code in harmony with the design process like no other tool does. And it's come a long way since Opensource.com last looked at it. Its latest features don't just improve your experience with Penpot, they propel the open source Penpot app past similar proprietary tools.

Prototyping with Penpot

One of the common problems with trying to design how an application might work best is that, at the time of designing, the application doesn't exist yet. A designer can visualize and storyboard to help both the design team and the programmer understand what to aim for. But it's a process that requires iteration and feedback as developers start to implement UX concepts, and designs change to react to the reality of code.

With Penpot, you can create a "working" prototype of your web or mobile application. You can connect buttons with specific actions, triggering changes in layout based on user input. And this can all be done before any code for the project exists.

The most important aspect of this isn't the ability to do a mock-up, though. Everything done in Penpot for an app's design has usable layout data that developers can use in the final project. Penpot isn't just a great drawing and layout tool. It informs the coding process.

Rather than providing just a visual list of designer-specific elements, like properties, colors, and typography, Penpot now integrates code output directly into the design workspace (like developer tools in a web browser). Designers and developers share the same space for design and front-end development, getting specifications in whatever format they need.

Image by: (Andrey Antukh, CC BY-SA 4.0)

Memory unlock

Many online design tools use proprietary technology to provide some fancy features, but at the price of essentially becoming an application you don't run so much as access through a browser. Penpot, though, uses open web standards and is rendered by your web browser. That means Penpot has access to the web browser's maximum available memory, which makes Penpot the first online prototype and layout application with design scalability. You can provide more options, more mock-ups, and more pitches. Plus, you can open your design space to more concurrent collaborators with no fear of running out of application memory.

Self-hosting and SaaS

Penpot is open source, so you don't have to use it on the cloud if that doesn't fit your workflow. You can self-host Penpot easily in a container, using it as a local application on your own workstation or hosting it for your organization on your own server.
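
As an illustration, a container-based install can be as simple as fetching the upstream compose file and starting it. The URL below reflects the Penpot repository layout at the time of writing, so check Penpot's documentation for the current path:

$ wget https://raw.githubusercontent.com/penpot/penpot/main/docker/images/docker-compose.yaml
$ docker compose -p penpot -f docker-compose.yaml up -d
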

Open source design

I've written an introductory article to Penpot previously, and since then the application has only gotten better. If you're looking to bring coders and stakeholders into your design process, then give Penpot a try.

Bridge the gap between programming and design with Penpot, an open source design workspace.

Image by: Opensource.com


Use Terraform to manage an OpenStack cluster

Tue, 01/31/2023 - 16:00
Use Terraform to manage an OpenStack cluster ajscanlas Tue, 01/31/2023 - 03:00

After running OpenStack in production and in a home lab for a while, I can definitively say that it's important to be able to provision a workload and manage it from both an Admin and a Tenant perspective.

Terraform is an open source Infrastructure-as-Code (IaC) software tool used for provisioning networks, servers, cloud platforms, and more. Terraform is a declarative language that can act as a blueprint of the infrastructure you're working on. You can manage it with Git, and it has a strong GitOps use case.

This article covers the basics of managing an OpenStack cluster using Terraform. I recreate the OpenStack Demo project using Terraform.

Install Terraform

I use CentOS as a jump host, where I run Terraform. Based on the official documentation, the first step is to add the Hashicorp repository:

$ sudo dnf config-manager \
    --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

Next, install Terraform:

$ sudo dnf install terraform -y

Verify the installation:

$ terraform --version

If you see a version number in return, you have installed Terraform.

Create a Terraform script for the OpenStack provider

In Terraform, you need a provider. A provider is a plugin that Terraform calls to convert your .tf files into API calls to the platform you are orchestrating.

There are three types of providers: Official, Partner, and Community:

  • Official providers are maintained by HashiCorp.
  • Partner providers are maintained by technology companies that partner with HashiCorp.
  • Community providers are maintained by open source community members.

There is a good community provider for OpenStack in the Terraform registry. To use this provider, create a .tf file and call it main.tf.

$ vi main.tf

Add the following content to main.tf:

terraform {
  required_version = ">= 0.14.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "1.49.0"
    }
  }
}

provider "openstack" {
  user_name   = "OS_USERNAME"
  tenant_name = "OS_TENANT"
  password    = "OS_PASSWORD"
  auth_url    = "OS_AUTH_URL"
  region      = "OS_REGION"
}

You need to change the OS_USERNAME, OS_TENANT, OS_PASSWORD, OS_AUTH_URL, and OS_REGION placeholders to match your environment for it to work. A variable-based variant is sketched below.
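
As a sketch, you could declare the password as a Terraform input variable instead of hardcoding it (the variable name below is my own placeholder, not a value required by the provider):

variable "os_password" {
  type      = string
  sensitive = true
}

provider "openstack" {
  user_name   = "OS_USERNAME"
  tenant_name = "OS_TENANT"
  password    = var.os_password
  auth_url    = "OS_AUTH_URL"
  region      = "OS_REGION"
}

You can then supply the value at run time with terraform apply -var "os_password=..." or through a TF_VAR_os_password environment variable, keeping the secret out of version control.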

Create an Admin Terraform file

OpenStack Admin files focus on provisioning external networks, routers, users, images, tenant profiles, and quotas.

This example provisions flavors, a router connected to an external network, a test image, a tenant profile, and a user.

First, create an AdminTF directory for the provisioning resources:

$ mkdir AdminTF
$ cd AdminTF

In the main.tf, add the following:

terraform {
  required_version = ">= 0.14.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "1.49.0"
    }
  }
}

provider "openstack" {
  user_name   = "OS_USERNAME"
  tenant_name = "admin"
  password    = "OS_PASSWORD"
  auth_url    = "OS_AUTH_URL"
  region      = "OS_REGION"
}

resource "openstack_compute_flavor_v2" "small-flavor" {
  name      = "small"
  ram       = "4096"
  vcpus     = "1"
  disk      = "0"
  flavor_id = "1"
  is_public = "true"
}

resource "openstack_compute_flavor_v2" "medium-flavor" {
  name      = "medium"
  ram       = "8192"
  vcpus     = "2"
  disk      = "0"
  flavor_id = "2"
  is_public = "true"
}

resource "openstack_compute_flavor_v2" "large-flavor" {
  name      = "large"
  ram       = "16384"
  vcpus     = "4"
  disk      = "0"
  flavor_id = "3"
  is_public = "true"
}

resource "openstack_compute_flavor_v2" "xlarge-flavor" {
  name      = "xlarge"
  ram       = "32768"
  vcpus     = "8"
  disk      = "0"
  flavor_id = "4"
  is_public = "true"
}

resource "openstack_networking_network_v2" "external-network" {
  name           = "external-network"
  admin_state_up = "true"
  external       = "true"

  segments {
    network_type     = "flat"
    physical_network = "physnet1"
  }
}

resource "openstack_networking_subnet_v2" "external-subnet" {
  name            = "external-subnet"
  network_id      = openstack_networking_network_v2.external-network.id
  cidr            = "10.0.0.0/8"
  gateway_ip      = "10.0.0.1"
  dns_nameservers = ["10.0.0.254", "10.0.0.253"]

  allocation_pool {
    start = "10.0.0.2"
    end   = "10.0.254.254"
  }
}

resource "openstack_networking_router_v2" "external-router" {
  name                = "external-router"
  admin_state_up      = true
  external_network_id = openstack_networking_network_v2.external-network.id
}

resource "openstack_images_image_v2" "cirros" {
  name             = "cirros"
  image_source_url = "https://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img"
  container_format = "bare"
  disk_format      = "qcow2"

  properties = {
    key = "value"
  }
}

resource "openstack_identity_project_v3" "demo-project" {
  name = "Demo"
}

resource "openstack_identity_user_v3" "demo-user" {
  name               = "demo-user"
  default_project_id = openstack_identity_project_v3.demo-project.id
  password           = "demo"
}

Create a Tenant Terraform file

As a Tenant, you usually create VMs. You also create network and security groups for the VMs.

This example uses the user created above by the Admin file.

First, create a TenantTF directory for Tenant-related provisioning:

$ mkdir TenantTF
$ cd TenantTF

In the main.tf, add the following:

terraform {
  required_version = ">= 0.14.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "1.49.0"
    }
  }
}

provider "openstack" {
  user_name   = "demo-user"
  tenant_name = "demo"
  password    = "demo"
  auth_url    = "OS_AUTH_URL"
  region      = "OS_REGION"
}

resource "openstack_compute_keypair_v2" "demo-keypair" {
  name       = "demo-key"
  public_key = "ssh-rsa ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
}

resource "openstack_networking_network_v2" "demo-network" {
  name           = "demo-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "demo-subnet" {
  network_id = openstack_networking_network_v2.demo-network.id
  name       = "demo-subnet"
  cidr       = "192.168.26.0/24"
}

resource "openstack_networking_router_interface_v2" "demo-router-interface" {
  router_id = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  subnet_id = openstack_networking_subnet_v2.demo-subnet.id
}

resource "openstack_compute_instance_v2" "demo-instance" {
  name            = "demo"
  image_id        = "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"
  flavor_id       = "3"
  key_pair        = "demo-key"
  security_groups = ["default"]

  metadata = {
    this = "that"
  }

  network {
    name = "demo-network"
  }
}

Initialize your Terraform

After creating the Terraform files, you need to initialize Terraform.

For Admin:

$ cd AdminTF
$ terraform init
$ terraform fmt

For Tenants:

$ cd TenantTF
$ terraform init
$ terraform fmt

Command explanation:

  • terraform init downloads the provider from the registry to use in provisioning this project.
  • terraform fmt formats the files for use in repositories.
Create a Terraform plan

Next, create a plan to see what resources will be created.

For Admin:

$ cd AdminTF
$ terraform validate
$ terraform plan

For Tenants:

$ cd TenantTF
$ terraform validate
$ terraform plan

Command explanation:

  • terraform validate validates whether the .tf syntax is correct.
  • terraform plan creates a plan file in the cache, where all managed resources can be tracked through creation and destruction.
Apply your first TF

To deploy the resources, use the terraform apply command. This command applies all resource states in the plan file.

For Admin:

$ cd AdminTF
$ terraform apply

For Tenants:

$ cd TenantTF
$ terraform apply

Next steps

Previously, I wrote an article on deploying a minimal OpenStack cluster on a Raspberry Pi. You can discover how to have more detailed Terraform and Ansible configurations and implement some CI/CD with GitLab.

Terraform is a declarative language that can act as a blueprint of the infrastructure you're working on.

Image by: Opensource.com

