Open-source News

AMD Graphics Driver Surpassing 4 Million Lines Of Code In Linux 5.19, NVIDIA Opens Up At 1 Million

Phoronix - Thu, 05/12/2022 - 20:30
Given the NVIDIA open-source kernel driver code announcement from yesterday and also the Linux 5.19 merge window coming up soon with a host of AMDGPU/AMDKFD kernel driver improvements and starting to prepare support for RDNA3, it's time for some fun with numbers around driver sizes...

Godot 4.0 Alpha 8 Game Engine Released With Some Nice Improvements

Phoronix - Thu, 05/12/2022 - 19:22
Godot 4.0 continues working its way towards release as the most acclaimed open-source game engine. Godot 4.0 brings Vulkan rendering, OpenXR support, and a ton of other features covered over the past few years, making it more competitive with commercial game engines. Out this morning is Godot 4.0 Alpha 8 with a few more improvements worth noting...

More AMD RDNA3 Code Prepared For Linux 5.19, RADV Begins Landing Task Shaders

Phoronix - Thu, 05/12/2022 - 18:39
While open-source fans this morning are celebrating NVIDIA finally publishing open-source kernel driver code as a step to opening up their driver, open-source AMD Radeon driver developers are proceeding as normal, undeterred by NVIDIA's open kernel-only approach. Another batch of AMD graphics code was sent to DRM-Next this morning, and over in user space, Mesa's RADV Vulkan driver has landed more task shader code...

Microsoft Releases CBL-Mariner 1.0 May 2022 Linux Distro Update

Phoronix - Thu, 05/12/2022 - 17:41
While this week Microsoft issued a production release of CBL-Mariner 2.0 as its in-house Linux distribution, they are continuing to maintain CBL-Mariner 1.0 for the time being and have overnight issued its newest monthly release...

NVIDIA CUDA 11.7 Brings Lazy Loading, Open GPU Kernel Driver Compatibility

Phoronix - Thu, 05/12/2022 - 17:16
Launched on Wednesday alongside the R515 NVIDIA Linux driver beta and the open-source NVIDIA GPU kernel driver announcement was CUDA 11.7...

How collaborative commons and open organization principles align

opensource.com - Thu, 05/12/2022 - 15:00
By Ron McFarland

I have read Jeremy Rifkin's book The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, which has a strong connection to open organization principles, particularly community building. Rifkin also writes about the future of green energy generation and energy use in logistics. This is the first of three articles in this series. In this article, I'll talk about collaborative commons. In the next, I'll talk about its impact on energy production and supply. In the last, I'll look at logistics.

Rifkin believes that the capitalist economy is slowly passing and that the collaborative commons is increasing in importance in the global economy, resulting in an economy that is part capitalist market and part collaborative commons (like open organization communities). Within these collaborative commons are the "social impact-focused organizations" that Laura Hilliger and Heather Leson have written about. Rifkin thinks the market and the commons sometimes find synergies where they can add value to one another while benefiting themselves. At other times, they are deeply adversarial, each attempting to absorb or replace the other.

Rifkin feels that the top-down, centralized capitalist system that has organized the day-to-day commercial, social, and political life of society for more than ten generations has peaked and begun a slow decline. Capitalist systems will remain part of the social order for at least the next half century, but the collaborative commons will ascend and play a major role around most of the globe by 2050.

A changing supply environment

Competition improves productivity and drives down prices and costs to the point where marginal costs are "near zero": once the initial fixed costs (equipment, technology, and other start-up expenses) are covered, producing each additional unit costs almost nothing, making the product nearly free to supply. I have always called marginal costs variable costs. The two are calculated slightly differently, but they are very similar, and the impact is the same.

For products that achieve near zero marginal costs, profits (the lifeblood of capitalism) dry up, at least if you consider profit the only motivation to supply the product. In a market-exchange economy, profits come from the gap between cost (variable and fixed) and selling price. Without that gap, there is no financial incentive. Industries like publishing, communications, camera film, and entertainment have already seen that gap disappear.

In industries where the gap between costs and selling price is very small, the collaborative network (a commons, community, association, or cooperative) comes to life. These networks serve their communities for reasons beyond profit, such as offering value and solving local problems. The split is never 100% of one motive and 0% of the other, but in collaborative networks, giving outweighs receiving and profiting. I think the same is true of the social impact-focused organizations of Laura Hilliger and Heather Leson that I mentioned above.

Zero marginal cost is impacting many for-profit industries, particularly renewable energy, information gathering and computing power, 3D printing, manufacturing, online higher education, and money transfers. In these sectors, consumers are becoming "prosumers": producing what they use, consuming some of it, and sharing the rest.

Product by product, industry by industry, service by service, while up-front costs (fixed, initial costs, and investment) are still high, they are coming down so much that individuals, creative commons, communities, and cooperatives can invest, not just large corporations or governments.

From this point on, marginal cost reduction is reaching physical goods and services, not just the information economy. Expect more giveaway items that draw people toward other items that can be purchased.

As society moves closer to a near zero marginal cost society, capitalism will be less dominant than today. Rifkin says we will move to a society of abundance over scarcity, a society where most things can be freely shared without concern for getting a return on investment for supplying the goods.

Changing economic paradigm

Assumptions about the best way to supply goods and services have to change if marginal cost drops to near zero. Demand is still there, but the cost of supplying it is near zero, and supply far exceeds the quantity demanded. The mindset shifts from profiting on the supply of an item (market benefit) to the satisfaction of providing it (social benefit). This is aligned with the social impact-focused organizations that Laura Hilliger and Heather Leson wrote about.

The capitalist model is under siege on two fronts:

  1. Interdisciplinary scholarship: Fields like ecological science, chemistry, biology, engineering, architecture, urban planning, and information technology are adding new concerns to business models, because many of those concerns fall outside the basic equipment and labor cost model. Environmental factors in particular are coming into play.

  2. New information technology platforms: These are weakening centralized control of major heavy industries. The convergence of the communication internet with the fledgling energy internet (producing, sharing, and consuming energy) and the logistics internet (moving, storing, and sharing goods) into a seamless 21st-century intelligent infrastructure, the Internet of Things (IoT), is giving rise to a new industrial revolution. An economy based on scarcity is slowly giving way to an economy of abundance.

According to Rifkin, the IoT will connect everything with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and waste analysis will all be linked by sensors, cameras, monitors, robots, and software with advanced analytics to make determinations. This will drive many items down toward near zero marginal cost. Researchers are looking at this now, for example in the Internet of Things European Research Cluster, and open access journals such as the "Discover" series publish research from across all fields relevant to the IoT.

Smart cities deploy structural health sensors, noise pollution sensors, parking space availability sensors, and sensors in garbage cans to optimize waste collection. There will be sensors in vehicles to gather information that reduces travel risks and insurance rates, sensors in forests to assess fire risk, sensors in farm soil, sensors on animals to trace migration trails, sensors in rivers to measure water quality, sensors on produce to track whereabouts and sniff out spoilage, sensors in humans to monitor bodily functions (heart rate, body temperature, skin coloration), and security systems to reduce crime. Many companies are developing these systems, including General Electric's "Industrial Internet", Cisco's "Internet of Everything", IBM's "Smarter Planet", and Siemens' "Sustainable Cities".

All these companies are connecting neighborhoods, cities, regions, and continents in what is called "a global neural network", designed to be open to all, distributed, and collaborative, allowing anyone, anywhere, to tap into big data.

Rifkin writes that these systems will marshal resources, production systems, distribution systems, and recycling of waste. Without communication, economic activities cannot be managed. Without energy, information can't be generated and transportation can't be powered. Without logistics, economic activity can't be moved across a supply chain.

The commons existed before capitalist markets or representative government. The contemporary commons are where billions of people engage in the deeply social aspects of life, like charities, religious bodies, arts and cultural groups, educational foundations (schools), amateur sports clubs, producers and consumer cooperatives, credit unions, health-care organizations (hospitals), crowdfunding communities, advocacy groups, and condominium associations.

Notice these are all community based and embody many open organization community principles. In all these organizations, members are partly owners, managers, workers, and customers (users). There are few battles between those roles because their goals are largely aligned, and the needs of the users matter most, since serving them is the community's greatest purpose.

Up until now, the social commons has been considered the third sector, behind markets and governments. But as time goes on, Rifkin thinks it may grow in importance, as the required capital investments come down to a level that local communities can handle.

While capitalist markets are based mainly on self-interest and driven by material gains, the social commons are motivated by collaborative interests and driven by a deep desire to connect with others and share (open-source, innovation, transparency, and community).

Rifkin writes that the IoT is the technical match for the emergence of the collaborative commons, as it is configured to be distributed, peer-to-peer in nature in order to facilitate collaboration, universal access and sharing, inclusion, and the search for synergies. It is moving from sales markets to social networks, from things owned to things utilized, from individual interests to collaborative interests, and from dreams of going from rags to riches to dreams of a sustainable quality life for all.

GDP and social value measurements

The value created by sharing in communities does not show up in GDP, because it is not measured economically. Therefore, new measurements are required that include educational growth, healthcare, infant mortality, life expectancy, environmental stewardship, human rights, democratic participation, volunteerism, leisure time, poverty, and the equitable distribution of wealth.

New kind of incentives

Rifkin thinks that the democratization of innovation and creativity on the emerging collaborative commons is spawning a new kind of incentive, based less on the expectation of financial reward and more on the desire to advance the social well-being of humanity. The collaborative effort will result in expanded human participation and creativity across society and flatten the way we organize institutions (like social impact-focused organizations).

Energy and social impact-focused organizations

I'll talk more about this in the second part of this series, but in short: fossil fuels are found only in certain places, are very capital intensive, and require centralized management to move, so they lend themselves to top-down command and control. Distributed energies, by contrast, are now leading to local empowerment through the development of collaborative commons. These laterally scaled communities will start to break up vertically integrated companies and monopolies.

These distributed renewable energies have to be organized collaboratively and shared peer-to-peer across communities and regions to create sufficient lateral economies of scale to bring their marginal cost to zero for everyone in society.

The beginnings of capitalism: centralized, top-down control and massive investment

Whether a society was communist, socialist, or capitalist, past industrial revolutions required massive investment in centralized, vertical, top-down structures to advance economic development.

According to Rifkin, in the industrial revolution now starting, those massive costs are coming down, so local cooperatives can invest in, manage, and control their own economic development. Initial investments can be financed peer-to-peer by hundreds of millions of individuals, which puts them within everyone's reach. But this only works for goods whose marginal (variable) costs of generating, storing, and sharing communications and energy are nearly zero. These are smart public infrastructures: laterally integrated networks on the collaborative commons rather than vertically integrated businesses in the capitalist market. They will be social enterprises (open organizations) connected to the IoT, using an open, distributed, and collaborative architecture to create peer-to-peer lateral economies of scale that eliminate virtually all the remaining middlemen. That will mark the start of producing and distributing goods that are very nearly free.

The supply environment is changing, and the world will have to adjust to it. Organizational models will have to change, and new ways of thinking and incentivizing will have to be developed. In the next part of this series, I'll look at energy, education, and other costs in more detail, at near zero marginal cost and the communities that develop around it. Much of our current energy and other spending is already moving in that direction.

In his book, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, Jeremy Rifkin explores the rise of collaborative commons in the global economy.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Get started with Bareos, an open source client-server backup solution

opensource.com - Thu, 05/12/2022 - 15:00
By Heike Jurzik

Bareos (Backup Archiving Recovery Open Sourced) is a distributed open source backup solution (licensed under AGPLv3) that preserves, archives, and recovers data from all major operating systems.

Bareos has been around since 2010 and is (mainly) developed by the company Bareos GmbH & Co. KG, based in Cologne, Germany. The vendor not only provides further development as open source software but also offers subscriptions, professional support, development, and consulting. This article introduces Bareos, its services, and basic backup concepts. It also describes where to get ready-built packages and how to join the Bareos community.

Modular design

Bareos consists of several services and applications that communicate securely over the network: the Bareos Director (Dir), one or more Storage Daemons (SD), and File Daemons (FD) installed on the client machines to be backed up. This modular design makes Bareos flexible and scalable: it's up to you whether to install all components on one system or across several hundred computers, even in different locations. The client-server software stores backups on all kinds of physical and virtual storage (HDD/SSD/SDS), tape libraries, and in the cloud. Bareos includes several plug-ins to support virtual infrastructures, application servers (databases such as PostgreSQL, MySQL, MSSQL, and MariaDB), and LDAP directory services.

Here are the Bareos components, what they do, and how they work together:

[Diagram: the Bareos components and how they work together (image by Heike Jurzik, CC BY-SA 4.0)]

Bareos Director

This is the core component and the control center of Bareos. It manages the database (the Catalog), clients, file sets (which define the data in a backup), plug-in configuration, backup jobs and schedules, storage and media pools, and before and after jobs (programs executed before or after a backup or restore job).

Catalog

The database maintains a record of all backup jobs, saved files, and backup volumes. Bareos uses PostgreSQL as the database backend.

File Daemon

The File Daemon (FD) runs on every client machine or the virtual layer to handle backup and restore operations. After the File Daemon has received the director's instructions, it executes them and then transmits the data to (or from) the Storage Daemon. Bareos offers client packages for various operating systems, including Windows, Linux, macOS, FreeBSD, Solaris, and other Unix-based systems on request.

Storage Daemon

The Storage Daemon (SD) receives data from one or more FDs and stores the data on the configured backup medium. The SD runs on the machine handling the backup devices. Bareos supports backup media like hard disks and flash arrays, tapes and tape libraries, and S3-compatible cloud solutions. If there is a media changer involved, the SD controls that device as well. During the restore process, the SD sends the correct data back to the requesting File Daemon. To increase flexibility, availability, and performance, there can be multiple SDs, for example, one per location.
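
To make that concrete, a minimal disk-based device definition on the Storage Daemon might look like the following sketch. The resource name and path are invented for illustration; consult the Bareos documentation for the full list of directives.

# Sketch of a Storage Daemon Device resource for disk-based volumes
Device {
  Name = FileStorage                        # referenced by the Director's Storage resource
  Media Type = File                         # disk volumes rather than tape
  Archive Device = /var/lib/bareos/storage  # directory where volumes are written
  Label Media = yes                         # let Bareos label new volumes itself
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
}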

Jobs and schedules

A backup job in Bareos describes what to back up (a FileSet directive describing data on the client), when to back up (a Schedule directive), and where to back up the data (a Pool directive). This modular design lets you define multiple jobs and combine several directives, such as FileSets, Pools, and Schedules. Bareos allows you to have two different job resources managing different servers but using the same Schedule and FileSet, maybe even the same Pool.

The schedule not only sets the backup type (full, incremental, or differential) but also describes when a job is supposed to run, i.e., on different days of the week or month. Because of that, you can plan a detailed schedule and run full backups every Monday, incremental backups the rest of the week, etc. If more than one backup job uses the same schedule, you can set the job priority and thus tell Bareos which job is supposed to run first.
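
As a rough sketch of how these directives fit together, the Director-side resources below define such a weekly cycle. The names (WeeklyCycle, LinuxAll, webserver-fd, and so on) are invented for illustration; check the Bareos manual for the full directive reference.

Schedule {
  Name = "WeeklyCycle"
  Run = Full mon at 21:00              # full backup every Monday
  Run = Incremental tue-sun at 21:00   # incrementals the rest of the week
}

FileSet {
  Name = "LinuxAll"
  Include {
    Options {
      Signature = SHA1      # checksums recorded in the Catalog
      Compression = GZIP
    }
    File = /etc
    File = /home
  }
}

Job {
  Name = "BackupWebserver"
  Type = Backup
  Client = "webserver-fd"   # the File Daemon to back up
  FileSet = "LinuxAll"
  Schedule = "WeeklyCycle"
  Storage = File            # Storage resource pointing at a Storage Daemon device
  Pool = Full               # media pool the volumes are drawn from
  Messages = Standard
  Priority = 10             # lower number runs first when jobs collide
}

A second job for another server could reuse the same Schedule and FileSet and change only the Client (and perhaps the Pool), exactly as described above.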

Encrypted communication

As mentioned, all Bareos services and applications communicate with each other over the network. Bareos provides TLS/SSL with pre-shared keys or certificates to ensure encrypted data transport. On top of that, Bareos can encrypt and sign data on the File Daemons before sending the backups to the Storage Daemon. Encryption and signing on the clients are implemented using RSA private keys combined with X.509 certificates (Public Key Infrastructure). Before the restore process, Bareos validates file signatures and reports any mismatches. Neither the Director nor the Storage Daemon has access to unencrypted content.
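
In current Bareos releases, transport encryption with TLS pre-shared keys is negotiated automatically, so explicit configuration is often unnecessary. Where certificate-based TLS is wanted instead, the relevant directives look roughly like this excerpt of a File Daemon resource (the name and paths are illustrative):

# Sketch: certificate-based TLS in a File Daemon's Client resource
Client {
  Name = webserver-fd
  TLS Enable = yes
  TLS Require = yes                                  # refuse unencrypted connections
  TLS CA Certificate File = /etc/bareos/tls/ca.pem
  TLS Certificate = /etc/bareos/tls/webserver-fd.pem
  TLS Key = /etc/bareos/tls/webserver-fd.key
}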

As a Bareos administrator, you can communicate with the backup software using a command-line interface (bconsole) or your preferred web browser (Bareos WebUI). The multilingual web interface manages multiple Bareos Directors and their databases. Also, it's possible to configure role-based access and create different profiles with ACLs (Access Control Lists) to control what a user can see and execute in the WebUI.
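
For example, a short bconsole session to check the Director, kick off a backup, and begin a restore might look like this (the job and client names are placeholders):

$ sudo bconsole
Connecting to Director localhost:9101
* status director
* run job=BackupWebserver yes
* list jobs
* restore client=webserver-fd
* quit

The run command queues the job immediately (the trailing yes skips the confirmation prompt), and restore walks you through selecting files interactively.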

[Screenshot: the Bareos WebUI (image by Heike Jurzik, CC BY-SA 4.0)]

The WebUI provides an overview and detailed information about backup jobs, clients, file sets, pools, volumes, and more. It's also possible to start backup and restore jobs via the web interface. Starting with Bareos 21, the WebUI provides a timeline to display selected jobs. This timeline makes it easy to spot running, finished, or even failed jobs. This is a great feature, especially in larger environments, as it lets you detect gaps in the schedule or identify which backup jobs are taking up the most time.

Packages, support, and training

There are no license fees for using Bareos. In addition to the Bareos source code, which is available on GitHub, the vendor provides Bareos packages in two different repositories (a sample installation is sketched after the list):

  • The community repository contains packages for all major releases (without support).
  • The subscription repository also offers packages for minor releases with updates, bug fixes, etc., for customers with a Bareos subscription.
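
As a quick illustration, installing from the community repository on a Debian-based system might look like the sketch below, once the Bareos repository for your release has been added. Package and service names can vary between distributions and versions; the repository's README has the exact steps.

$ sudo apt update
$ sudo apt install bareos bareos-webui    # metapackage pulls in Director, SD, and FD
$ sudo systemctl enable --now bareos-director bareos-storage bareos-filedaemon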

Customers with a valid subscription can also buy support and consulting from the manufacturer or sponsor the development of new features. Bareos GmbH & Co. KG has a global partner network, offering support and training in multiple languages.

Join the Bareos community

Bareos is a very active open source project with a great community. The source code of the software and the Bareos manual sources are hosted on GitHub, and everyone is welcome to contribute. Bareos also offers two mailing lists, one for users (bareos-users) and one for developers (bareos-devel). For news and announcements, technical guides, quick howtos, and more, you can also follow the Bareos blog.

Bareos preserves, archives, and recovers data from all major operating systems. Discover how its modular design and key features support flexibility, availability, and performance.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 reasons to use sudo on Linux

opensource.com - Thu, 05/12/2022 - 15:00
By Seth Kenlon

On traditional Unix and Unix-like systems, the first and only user that exists on a fresh install is named root. Using the root account, you log in and create secondary "normal" users. After that initial interaction, you're expected to log in as a normal user.

Running your system as a normal user is a self-imposed limitation that protects you from silly mistakes. As a normal user, you can't, for instance, delete the configuration file that defines your network interfaces or accidentally overwrite your list of users and groups. You can't make those mistakes because, as a normal user, you don't have permission to access those important files. Of course, as the literal owner of a system, you could always use the su command to become the superuser (root) and do whatever you want, but for everyday tasks you're meant to use your normal account.

Using su worked well enough for a few decades, but then the sudo command came along.

To a longtime superuser, the sudo command might seem superfluous at first. In some ways, it feels very much like the su command. For instance, here's the su command in action:

$ su root
<enter passphrase>
# dnf install -y cowsay

And here's sudo doing the same thing:

$ sudo dnf install -y cowsay
<enter passphrase>

The two interactions are nearly identical. Yet most distributions recommend using sudo instead of su, and many major distributions have disabled direct root logins altogether. Is it a conspiracy to dumb down Linux?

Far from it, actually. In fact, sudo makes Linux more flexible and configurable than ever, with no loss of features and several significant benefits.

[ Download the cheat sheet: Linux sudo command ]

Why sudo is better than root on Linux

Here are five reasons you should be using sudo instead of su.

1. Root is a confirmed attack vector

I use the usual mix of firewalls, fail2ban, and SSH keys to prevent unwanted entry to the servers I run. Before I understood the value of sudo, I used to look through logs with horror at all the failed brute force attacks directed at my server. Automated attempts to log in as root are easily the most common, and with good reason.

An attacker with enough knowledge to attempt a break-in would also know that, before the widespread use of sudo, essentially every Unix and Linux system had a root account. That's one less guess an attacker has to make about how to get into your server. The login name is always right, as long as it's root, so all an attacker needs is a valid passphrase.

Removing the root account offers a good amount of protection. Without root, a server has no confirmed login accounts. An attacker must guess at possible login names. In addition, the attacker must guess a password to associate with a login name. That's not just one guess and then another guess; it's two guesses that must be correct concurrently.

2. Root is the ultimate attack vector

Another reason root is a popular name in failed access logs is that it's the most powerful user possible. If you're going to set up a script to brute force its way into somebody else's server, why waste time trying to get in as a regular user with limited access to the machine? It only makes sense to go for the most powerful user available.

By being both the singularly known user name and the most powerful user account, root essentially makes it pointless to try to brute force anything else.

3. Selective permission

The su command is all or nothing. If you have the root password, you can become the superuser; if you don't, you have no administrative privileges whatsoever. The problem with this model is that a sysadmin has to choose between handing over the master key to their system or withholding the key and all control of the system. That's not always what you want. Sometimes you want to delegate.

For example, say you want to grant a user permission to run a specific application that usually requires root permissions, but you don't want to give this user the root password. By editing the sudo configuration, you can allow a specific user, or any number of users belonging to a specific Unix group, to run a specific command. The sudo command asks for the user's own existing password, not your password, and certainly not the root password.
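
A minimal sketch of what that delegation can look like in the sudo configuration follows. The user, group, and commands here are invented for illustration, and sudoers files should always be edited with visudo:

# /etc/sudoers.d/delegation -- edit with: visudo -f /etc/sudoers.d/delegation
# Let one user restart a single service, and nothing else:
alice    ALL=(root) /usr/bin/systemctl restart httpd.service
# Let everyone in the "deploy" Unix group run the package manager:
%deploy  ALL=(root) /usr/bin/dnf

With the first rule in place, alice can run sudo systemctl restart httpd.service and authenticate with her own password; any other privileged command is refused.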

4. Time out

When running a command with sudo, an authenticated user's privileges are escalated for 5 minutes. During that time, they can run the command or commands you've given them permission to run.

After 5 minutes, the authentication cache is cleared, and the next use of sudo prompts for a password again. This timeout prevents a user from accidentally re-running a privileged command later (for instance, through a careless search of shell history or a few too many Up arrow presses). It also ensures that another user can't run the commands if the first user walks away from their desk without locking the computer screen.
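
That 5-minute window is only the default, and it's adjustable in the sudoers configuration. For example (a sketch; pick a value that matches your threat model):

# Re-prompt for a password after 2 minutes instead of the default 5:
Defaults timestamp_timeout=2
# Or require a password on every single invocation:
Defaults timestamp_timeout=0

You can also clear the cached credentials yourself at any time by running sudo -k before stepping away.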

5. Logging

The shell history feature serves as a log of what a user has been doing. Should you ever need to understand how something on your system happened, you could (in theory, depending on how shell history is configured) use su to switch to somebody else's account, review their shell history, and maybe get an idea of what commands a user has been executing.

If you need to audit the behavior of tens or hundreds of users, however, this method doesn't scale. Shell histories also rotate out pretty quickly, with a default length of 1,000 lines, and they're easily circumvented by prefacing a command with a space (in Bash, commands starting with a space are omitted from history when HISTCONTROL includes ignorespace).

When you need logs on administrative tasks, sudo offers a complete logging and alerting subsystem, so you can review activity from a centralized location and even get an alert when something significant happens.
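
As a sketch, both per-command logging and full session recording can be enabled through sudoers Defaults settings (the log file path below is conventional, not mandatory):

# Log every sudo invocation to a dedicated file:
Defaults logfile=/var/log/sudo.log
# Record terminal input and output for later replay with sudoreplay:
Defaults log_input, log_output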

Learn the features

The sudo command has even more features, both current and in development, than what I've listed in this article. Because sudo is often something you configure once and then forget about, or something you configure only when a new admin joins your team, it can be hard to remember its nuances.

Download our sudo cheat sheet and use it as a helpful reminder for all of its uses when you need it the most.

Here are five security reasons to switch to the Linux sudo command. Download our sudo cheat sheet for more tips.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
