Open-source News

Open Mainframe Project Announces Schedule for the 3rd Annual Open Mainframe Summit on September 21-22 in Philadelphia, PA

The Linux Foundation - Wed, 07/13/2022 - 21:45

The first-ever in-person Summit will focus on security, training, AI, Linux on Z and Cloud Native, and will be accessible online for attendees around the world

SAN FRANCISCO, July 13, 2022 – The Open Mainframe Project, an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources, today announced the schedule for the 3rd annual Open Mainframe Summit, which will be held in person in Philadelphia, PA, and streamed online for global attendees. This year’s theme is security, which is top of mind for every company that uses mainframes.

Critical enterprise systems are more connected than ever, which means vulnerabilities have increased. In fact, according to The Essential Holistic Security Strategy, a recent report by Forrester Consulting, commissioned by Open Mainframe Project Silver Member BMC, 81 percent of organizations surveyed are prioritizing the integration of security functions and improving security detection and response.

This year’s program highlights security as it relates to all aspects of the mainframe and beyond, including cloud native services, automation, software supply chain management and more. The Summit will also highlight projects such as Zowe and COBOL, along with education and training topics that offer seasoned professionals, developers, students and thought leaders an opportunity to share best practices and network with like-minded individuals.

Some of the security sessions include:

Additionally, David Wheeler, Open Source Supply Chain Security Director at the Linux Foundation, will give a keynote.

Other highlights include:

See the full conference schedule here.

Open Mainframe Project would like to thank this year’s Open Mainframe Summit planning committee including Alan Clark, CTO Office and Director for Industry Initiatives, Emerging Standards and Open Source at SUSE; Donna Hudi, Chief Marketing Officer at Phoenix Software; Elizabeth K. Joseph, Developer Advocate at IBM; and Michael Bauer, Staff Product Owner at Broadcom, Inc.

Early bird pricing ($500 US) for in-person attendees ends on July 15. Registration for academia is $50 for in-person and $15 for a virtual pass. Register here.

Open Mainframe Summit is made possible thanks to Platinum Sponsors Broadcom Mainframe Software, IBM, and SUSE and Gold Sponsors BMC, Micro Focus and Vicom Infinity, a Converge Company. For information on becoming an event sponsor, click here by August 5. 

Members of the press who would like to request a press pass to attend should contact Maemalynn Meanor at maemalynn@linuxfoundation.org.

About the Open Mainframe Project

The Open Mainframe Project is intended to serve as a focal point for deployment and use of Linux and Open Source in a mainframe computing environment. With a vision of Open Source on the Mainframe as the standard for enterprise class systems and applications, the project’s mission is to build community and adoption of Open Source on the mainframe by eliminating barriers to Open Source adoption on the mainframe, demonstrating value of the mainframe on technical and business levels, and strengthening collaboration points and resources for the community to thrive. Learn more about the project at https://www.openmainframeproject.org.

About The Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

The post Open Mainframe Project Announces Schedule for the 3rd Annual Open Mainframe Summit on September 21-22 in Philadelphia, PA appeared first on Linux Foundation.

Benchmarking The Linux 5.19 Kernel Built With "-O3 -march=native"

Phoronix - Wed, 07/13/2022 - 18:30
Following the upstream discussions over -O3'ing the Linux kernel last month, I ran some fresh benchmarks of the Linux kernel built with -O2 versus -O3. After the -O3 optimized kernel build results weren't too impressive, a number of Phoronix readers were virtually shouting that "-O3 -march=native" is where it's at for fun and performance... To appease those readers, even though in the past it hasn't proven worthwhile and upstream kernel developers are against it, here are those numbers...

Open-Source NVIDIA "Nouveau" Driver Sees Few Changes For Linux 5.20

Phoronix - Wed, 07/13/2022 - 17:19
There hasn't been much to report on lately for the reverse-engineered Nouveau driver providing open-source NVIDIA GPU driver support on Linux. Several recent Linux kernel series haven't even seen any Nouveau DRM/KMS driver pull requests with changes. For the upcoming Linux 5.20 cycle, a set of Nouveau changes was sent to DRM-Next today, but it's quite tiny...

Intel oneAPI GPU Rendering Appears Ready For Blender 3.3

Phoronix - Wed, 07/13/2022 - 17:00
Intel's effort to add oneAPI/SYCL support to Blender for GPU acceleration with forthcoming Arc Graphics hardware appears all buttoned up for the upcoming Blender 3.3 release...

FDC3 2.0 Drives Desktop Interoperability Across the Financial Services Ecosystem

The Linux Foundation - Wed, 07/13/2022 - 16:45
The Fintech Open Source Foundation builds on the success of FDC3, its most adopted open source project to date

New York, NY – July 13, 2022 – The Fintech Open Source Foundation (FINOS), the financial services umbrella of the Linux Foundation, today announced the launch of FDC3 2.0 during its Open Source in Finance Forum (OSFF) London. FDC3 supports efficient, streamlined desktop interoperability between financial institutions with enhanced connectivity capabilities.

The global FDC3 community is fast-growing and includes application vendors, container vendors, a large presence from sell-side firms and growing participation from buy-side firms, all collaborating on advancing the standard.

You can check out all the community activity here: http://fdc3.finos.org/community

The latest version of the standard delivers universal connectivity to the financial industry’s desktop applications with a significant evolution of all four parts of the Standard: the Desktop Agent API, the App Directory that provides access to apps, and the intents and context messages that those apps exchange.

MAIN IMPROVEMENTS

  • FDC3 2.0 significantly streamlines the API for app developers and desktop agent vendors alike, refining the contract between these two groups based on the last three years of working with FDC3 1.x.
  • Desktop agents now support two-way data-flow between apps (both single transactions and data feeds), working with specific instances of apps and providing metadata on the source of messages – through an API that has been refined through feedback from across the FDC3 community.
  • This updated version also redefines the concept of the “App Directory”, simplifying the API, greatly improving the App Record and the discoverability experience for users, and making the App Directory fit for purpose for years to come (and for the explosion of vendor interest FDC3 is currently experiencing).
  • Finally, FDC3 2.0 includes a host of new standard intents and contexts, which define and standardize message exchanges for a range of very common workflows, including interop with CRMs, communication apps (emails, calls, chats), data visualization tools, research apps and OMS/EMS/IMS systems. This is one of the most exciting developments, as it represents diverse parts of the financial services software industry working together through the standard.

MAIN USES

  • Help Manage Information Overload. Finance is an information-dense environment. Typically, traders will use several different displays so that they can keep track of multiple information sources at once. FDC3 helps with this by sharing the “context” between multiple applications, so that they collectively track the topic the user is focused on.
  • Work Faster. FDC3 standardizes a way to call actions and exchange data between applications (called “intents”). Applications can contribute intents to each other, extending each other’s functionality. Instead of the user copying and pasting bits of data from one application to another, FDC3 makes sure the intents have the data they need to seamlessly transition activity between applications (see the sketch after this list).
  • Platform-Agnostic. As an open standard, FDC3 can be implemented on any platform and in any language. All that is required is a “desktop agent” that supports the FDC3 standard, which is responsible for coordinating application interactions. FDC3 is successfully running on web and native platforms in financial institutions around the world.
  • End the Integration Nightmare. By providing support for FDC3, vendors and financial organizations alike can avoid the bilateral or trilateral integration projects that plague desktop app roll-outs, cause vendor lock-in and result in a slow pace of change on the financial services desktop.
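
To make the idea of intents and shared context more concrete, here is a minimal TypeScript sketch of what an FDC3 interaction can look like. It assumes a desktop agent exposing the standard FDC3 2.0 API (for example via the @finos/fdc3 npm package); the ticker, intent handling and logging shown are illustrative only and are not part of the announcement.

// Illustrative only: assumes an FDC3 2.0 desktop agent is running and has
// placed this app on a user channel.
import { broadcast, raiseIntent, addContextListener } from "@finos/fdc3";

// A standard FDC3 context describing an instrument (ticker is illustrative).
const instrument = {
  type: "fdc3.instrument",
  name: "Apple Inc.",
  id: { ticker: "AAPL" },
};

async function shareAndAct() {
  // Broadcast the context so other apps on the same user channel
  // switch to the same instrument.
  await broadcast(instrument);

  // Ask the desktop agent to route the standard "ViewChart" intent to an
  // app that can handle it, passing the instrument along as its data.
  const resolution = await raiseIntent("ViewChart", instrument);
  console.log("ViewChart resolved by", resolution.source);
}

async function listenForInstruments() {
  // Receive instrument contexts broadcast by other apps on the channel;
  // the returned listener can be unsubscribed later.
  const listener = await addContextListener("fdc3.instrument", (context) => {
    console.log("Now tracking", context.id?.ticker);
  });
  return listener;
}

The point of the standard is that neither application needs prior knowledge of the other: the desktop agent resolves the intent, and the fdc3.instrument context carries data both apps understand.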

“It is very rewarding to see the recent community growth around FDC3,” said Jane Gavronsky, CTO of FINOS. “More and more diverse participants in the financial services ecosystem recognize the key role a standard such as FDC3 plays for achieving a true open financial services ecosystem. We are really excited about FDC3 2.0 and the potential for creating concrete, business-driven use cases that it enables.” 

What this means for the community 

“The wide adoption of the FDC3 standard shows the relevance of the work being conducted by FINOS. At Symphony we are supporters and promoters of this standard. This latest version, FDC3 2.0, and its improvements demonstrate substantial progress in this work and its growing importance to the financial services industry,” said Brad Levy, Symphony CEO.

“The improvements to the App Directory and its ramifications for market participants and vendors are game-changing enough in themselves to demand attention from everyone: large sell-sides with large IT departments, slim asset managers who rely on vendor technology, and vendors themselves”, said Jim Bunting, Global Head of Partnerships, Cosaic.

“FDC3 2.0 delivers many useful additions for software vendors and financial institutions alike. Glue42 continues to offer full support for FDC3 in its products. For me, the continued growth of the FDC3 community is the most exciting development,” said Leslie Spiro, CEO, Tick42/Glue42. “For example, recent contributions led by Symphony, SinglePoint and others have helped to extend the common data contexts to cover chat and contacts; this makes FDC3 even more relevant and strengthens our founding goal of interop ‘without requiring prior knowledge between apps’.”

“Citi is a big supporter of FDC3 as it has allowed us to simplify how we create streamlined intelligent internal workflows, and partner with strategic clients to improve their overall experience by integrating each other’s services. The new FDC3 standard opens up even more opportunities for innovation between Citi and our clients,” said Amit Rai, Technology Head of Markets Digital & Enterprise Portal Framework at Citi.

“FDC3 has allowed us to build interoperability within our internal application ecosystem in a way that will allow us to do the same with external applications as they start to incorporate these standards,” said Bhupesh Vora, European Head of Capital Markets Technology, Royal Bank of Canada. “The next evolution of FDC3 will ensure we continue to build richer context sharing capabilities with our internal applications and bring greater functionality to our strategic clients through integration with the financial application ecosystem for a more cohesive experience overall.”

“Interoperability allows the Trading team to take control of their workflows, allowing them to reduce the time it takes to get to market. In addition they are able to generate alpha by being able to quickly sort vast, multiple sources of data,” said Carl James, Global Head of Fixed Income Trading, Pictet Asset Management. 

As FINOS sees continued growth and contribution to the FDC3 standard, the implementation of FDC3 2.0 will allow more leading financial institutions to take advantage of enhanced desktop interoperability. The continued stream of contributions also reflects the wider adoption of open source technology across the industry, as reported in last year’s 2021 State of Open Source in Financial Services annual survey. To get involved in this year’s survey, visit https://www.research.net/r/ZN7JCDR to share key insights into the ever-growing open source landscape in financial services.

Skill up on FDC3 by taking the Linux Foundation’s free FDC3 training course, or contact us at https://www.finos.org/contact-us. Hear from Kris West, Principal Engineer at Cosaic and Lead Maintainer of FDC3, on the FINOS Open Source in Finance Podcast, where he discusses why it was important to change the FDC3 standard in order to keep up with the growing number of use cases end users are contributing to the community.

About FINOS

FINOS (The Fintech Open Source Foundation) is a nonprofit whose mission is to foster adoption of open source, open standards and collaborative software development practices in financial services. It is the center for open source developers and the financial services industry to build new technology projects that have a lasting impact on business operations. As a regulatory compliant platform, the foundation enables developers from these competing organizations to collaborate on projects with a strong propensity for mutualization. It has enabled codebase contributions from both the buy- and sell-side firms and counts over 50 major financial institutions, fintechs and technology consultancies as part of its membership. FINOS is also part of the Linux Foundation, the largest shared technology organization in the world.

The post FDC3 2.0 Drives Desktop Interoperability Across the Financial Services Ecosystem appeared first on Linux Foundation.

A guide to productivity management in open source projects

opensource.com - Wed, 07/13/2022 - 15:00
Thabang Mashologu - Wed, 07/13/2022 - 03:00

Open source is one of the most important technology trends of our time. It’s the lifeblood of the digital economy and the preeminent way that software-based innovation happens today. In fact, it’s estimated that over 90% of software released today contains open source libraries.

There's no doubt the open source model is effective and impactful. But is there still room for improvement? When comparing the broader software industry’s processes to that of open source communities, one big gap stands out: productivity management.

By and large, open source project leads and maintainers have been slow to adopt modern productivity and project management practices and tools commonly embraced by startups and enterprises to drive the efficiency and predictability of software development processes. It’s time we examine how the application of these approaches and capabilities can improve the management of open source projects for the better.

Understanding productivity in open source software development

The open source model, at its heart, is community-driven. There is no single definition of success for different communities, so a one-size-fits-all approach to measuring success does not exist. And what we have traditionally thought of as productivity measures for software development, like commit velocity, the number of pull requests approved and merged, and even the lines of code delivered, only tell part of the story.

Open source projects are people-powered. We need to take a holistic and humanistic approach to measuring productivity that goes beyond traditional measures. I think this new approach should focus on the fact that great open source is about communication and coordination among a diverse community of contributors. The level of inclusivity, openness, and transparency within communities impacts how people feel about their participation, resulting in more productive teams.

These and other dimensions of what contributes to productivity on open source teams can be understood and measured with the SPACE framework, which was developed based on learnings from the proprietary world and research conducted by GitHub, the University of Victoria in Canada, and Microsoft. I believe that the SPACE framework has the potential to provide a balanced view of what is happening in open source projects, which would help to drive and optimize collaboration and participation among project team members.

A more accurate productivity framework

The SPACE framework acronym stands for:

  • Satisfaction and well-being
  • Performance
  • Activity
  • Communication and collaboration
  • Efficiency and flow

Satisfaction and well-being refer to how fulfilled developers feel with the team, their tools, and the environment, as well as how healthy and happy they are. Happiness is somewhat underrated as a factor in the success of teams. Still, there is strong evidence of a direct correlation between the way people feel and their productivity. In open source, surveying contributors, committers, and maintainers about their preferences and priorities, and about what is being done and how, is essential to understanding how the community feels.

Performance in this context is about evaluating productivity in terms of the outcomes of processes instead of output. Team-level examples are code-review velocity (which captures the speed of reviews) and story points shipped. More holistic measures focus on quality and reliability. For example, was the code written in a way that ensures it will reliably do what it is supposed to do? Are there a lot of bugs in the software? Is industry adoption of the software growing?

Open source activity focuses on measuring design and development work along with CI/CD metrics, like builds, tests, deployments, releases, and infrastructure utilization. Example metrics for open source projects are the number of pull requests, commits, code reviews completed, build releases, and project documents created.

Communication and collaboration capture how people and teams work together, communicate, and coordinate efforts with high transparency and awareness within and between teams. Metrics in this area focus on the vibrancy of forums, as measured by the number of posts, messages, questions asked and answered, and project meetings held.

Finally, efficiency and flow refer to the ability to complete work and progress towards it with minimal interruptions and delays. At the individual developer level, this is all about getting into a flow that allows complex tasks to be completed with minimal distractions, interruptions, or context switching. At the project team level, this is about optimizing flow to minimize the delays and handoffs that take place in the steps needed to take software from an idea or feature request to being written into code. Metrics are built around process delays, handoffs, time on task, and the ease of project contributions and integrations.

Applying the SPACE framework to open source teams

Here are some sample metrics to illustrate how the SPACE framework could be used for an open source project.

Satisfaction and well-being
  • Contributor satisfaction
  • Community sentiment
  • Community growth & diversity
Performance
  • Code review velocity
  • Story points shipped
  • Absence of bugs
  • Industry adoption
Activity
  • Number of pull requests
  • Number of commits
  • Number of code reviews
  • Number of builds
  • Number of releases
  • Number of docs created
Communication and collaboration
  • Forum posts
  • Messages
  • Questions asked & answered
  • Meetings
Efficiency and flow
  • Code review timing
  • Process delays & handoffs
  • Ease of contributions/integration
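
As a rough illustration of how a couple of the activity metrics above could be gathered automatically, the sketch below (TypeScript, Node 18+) counts pull requests opened and issues closed over a time window using GitHub's public search API. The repository name and start date are placeholders, unauthenticated requests are heavily rate-limited, and a real setup would combine counts like these with the satisfaction, communication, and flow measures described earlier rather than treating raw activity as the goal.

// Sketch only: counts two SPACE "activity" signals for a placeholder repo.
const REPO = "example-org/example-project"; // placeholder repository
const SINCE = "2022-06-13"; // start of the measurement window

async function countSearchResults(query: string): Promise<number> {
  // GitHub's issue/PR search endpoint returns a total_count field.
  const url =
    "https://api.github.com/search/issues?q=" + encodeURIComponent(query);
  const response = await fetch(url, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!response.ok) {
    throw new Error(`GitHub API request failed: ${response.status}`);
  }
  const body = (await response.json()) as { total_count: number };
  return body.total_count;
}

async function main() {
  const prsOpened = await countSearchResults(
    `repo:${REPO} is:pr created:>=${SINCE}`
  );
  const issuesClosed = await countSearchResults(
    `repo:${REPO} is:issue closed:>=${SINCE}`
  );
  console.log(`Pull requests opened since ${SINCE}: ${prsOpened}`);
  console.log(`Issues closed since ${SINCE}: ${issuesClosed}`);
}

main().catch((err) => console.error(err));
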
Tools for managing open source projects must be fit for purpose

There is an opportunity to leverage the tools and approaches startups and high-growth organizations use to understand and improve open source development efficiency, all while putting open source’s core tenets, like openness and transparency, into practice.

Tools used by open source teams should enable maintainers and contributors to be productive and successful, while allowing the projects to be open and welcoming to everyone, including developers who may work in multiple organizations and even competing companies. It is also critical to provide an excellent onboarding experience for new contributors and accelerate their time-to-understanding and time-to-contribution.

Tools for managing open source projects should transparently manage data and accurately reflect project progress based on where the collaboration happens: in the codebase and repositories. Open source teams should be able to see real-time updates based on updates to issues and pull requests. And, project leads and maintainers should have the flexibility to decide whether access to the project should be completely public or if it should be limited to trusted individuals for issues or information of a more sensitive nature.

Ideally, tools should allow self-governed project teams to streamline coordination, processes, and workflows and eliminate repetitive tasks through automation. This reduces human friction and empowers maintainers and contributors to focus on what really matters: contributing to the ecosystem or community and delivering releases faster and more reliably.

The tools teams use should also support collaboration from people wherever they are. Since open source teams work in a remote and asynchronous world, tools should be able to integrate everyone’s contributions wherever and whenever they occur. These efforts should be enabled by great documentation stored in a central and easily accessible place. And finally, the tools should enable continuous improvement based on the types of frameworks and measures of productivity outlined above.

Features that allow for increased transparency are especially important for open source projects. Tools should help keep community members aligned and working towards a common goal with a project roadmap that shows work in flight, progress updates, and predicted end dates.

Conclusion

Open source projects are a benefit to us all, and as such, it benefits everyone to make the processes that exist within these projects as productive as possible.

By leveraging concepts like the SPACE framework and modern tools, we can ditch the spreadsheets and manual ways of tracking, measuring, and improving productivity. We can adapt approaches that power software development in the proprietary world and leverage modern tools that can help increase the quality, reliability, and predictability of open source software releases. Open source is far too important to leave to anything less.

Image by: opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I create music playlists on Linux

opensource.com - Wed, 07/13/2022 - 15:00
Rikard Grossma… - Wed, 07/13/2022 - 03:00

I recently wrote a C program in Linux to create a smaller random selection of MP3 files from my extensive MP3 library. The program goes through a directory containing my MP3 library, and then creates a directory with a random, smaller selection of songs. I then copy the MP3 files to my smartphone to listen to them on the go.

Sweden is a sparsely populated country with many rural areas where you don't have full cell phone coverage. That's one reason for having MP3 files on a smartphone. Another reason is that I don't always have the money for a streaming service, so I like to have my own copies of the songs I enjoy.

You can download my application from its Git repository. I wrote it for Linux specifically in part because it's easy to find well-tested file I/O routines on Linux. Many years ago, I tried writing the same program on Windows using proprietary C libraries, and I got stuck trying to get the file copying routine to work. Linux gives the user easy and direct access to the file system.

In the spirit of open source, it didn't take much searching before I found Linux file I/O code to inspire me, along with some code for allocating memory. I wrote the random number generation code myself.

The program works as described here:

  1. It asks for the source and destination directory.
  2. It asks for the number of files in the directory of MP3 files.
  3. It asks for the percentage (from 1.0 to 88.0 percent) of your collection that you wish to copy. You can also enter a fractional number like 12.5 if, say, you have a collection of 1000 files and wish to copy 125 of them rather than 120. I put the cap at 88% because copying more than 88% of your library would mostly generate a collection similar to your base collection. Of course, the code is open source, so you can freely modify it to your liking.
  4. It allocates memory using pointers and malloc. Memory is required for several actions, including the list of strings representing the files in your music collection. There is also a list to hold the randomly generated numbers.
  5. It generates a list of random numbers in the range of all the files (for example, 1 to 1000, if the collection has 1000 files).
  6. It copies the files.

Some of these parts are simpler than others, but the code is only about 100 lines:

#include <stdio.h>      /* include necessary header files */
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define BUF_SIZE 4096    /* use buffer of 4096 bytes */
#define OUTPUT_MODE 0700 /* protect output file */
#define MAX_STR_LEN 256

int main(void)
{
    DIR *d;
    struct dirent *dir;
    char strTemp[256], srcFile[256], dstFile[256], srcDir[256], dstDir[256];
    char **ptrFileLst;
    char buffer[BUF_SIZE];
    int nrOfStrs=-1, srcFileDesc, dstFileDesc, readByteCount, writeByteCount, numFiles;
    int indPtrFileAcc, q;
    float nrFilesCopy;
    //vars for generatingRandNumList
    int i, k, curRanNum, curLstInd, numFound, numsToGen, largNumRange;
    int *numLst;
    float procFilesCopy;

    printf("Enter name of source Directory\n");
    scanf("%s", srcDir);
    printf("Enter name of destination Directory\n");
    scanf("%s", dstDir);
    printf("How many files does the directory with mp3 files contain?\n");
    scanf("%d", &numFiles);
    printf("What percent of the files do you wish to make a random selection of\n");
    printf("enter a number between 1 and 88\n");
    scanf("%f", &procFilesCopy);

    //allocate memory for filesList, list of random numbers
    ptrFileLst = (char**) malloc(numFiles * sizeof(char*));
    for (i = 0; i < numFiles; i++)
        ptrFileLst[i] = (char*) malloc(MAX_STR_LEN * sizeof(char));
    numLst = (int*) malloc(numFiles * sizeof(int));

    /* the selection and directory-scanning logic in this section follows the
       numbered steps described above; exact details may differ from the
       author's original listing */

    //work out how many files to pick
    nrFilesCopy = (procFilesCopy / 100.0) * numFiles;
    numsToGen = (int) nrFilesCopy;
    largNumRange = numFiles;

    //generate numsToGen unique random indices in the range 0..numFiles-1
    srand(time(NULL));
    curLstInd = -1;
    for (i = 0; i < numsToGen; i++) {
        do {
            curRanNum = rand() % largNumRange;
            numFound = 0;
            for (k = 0; k <= curLstInd; k++)
                if (numLst[k] == curRanNum)
                    numFound = 1;
        } while (numFound);
        curLstInd++;
        numLst[curLstInd] = curRanNum;
    }

    //read the file names in the source directory (skip hidden entries)
    d = opendir(srcDir);
    if (d) {
        while ((dir = readdir(d)) != NULL) {
            strcpy(strTemp, dir->d_name);
            if (strTemp[0] != '.') {
                nrOfStrs++;
                strcpy(ptrFileLst[nrOfStrs], strTemp);
            }
        }
        closedir(d);
    }

    //copy each randomly selected file from srcDir to dstDir
    for (q = 0; q <= curLstInd; q++) {
        indPtrFileAcc = numLst[q];
        strcpy(srcFile, srcDir);
        strcat(srcFile, "/");
        strcat(srcFile, ptrFileLst[indPtrFileAcc]);
        strcpy(dstFile, dstDir);
        strcat(dstFile, "/");
        strcat(dstFile, ptrFileLst[indPtrFileAcc]);
        srcFileDesc = open(srcFile, O_RDONLY);
        dstFileDesc = creat(dstFile, OUTPUT_MODE);
        while (1) {
            readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
            if (readByteCount <= 0) break;
            writeByteCount = write(dstFileDesc, buffer, readByteCount);
            if (writeByteCount <= 0) exit(4);
        }
        //close the files
        close(srcFileDesc);
        close(dstFileDesc);
    }
}

This is possibly the most complex part of the code:

while (1) {
    readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
    if (readByteCount <= 0) break;
    writeByteCount = write(dstFileDesc, buffer, readByteCount);
    if (writeByteCount <= 0) exit(4);
}


This loop reads a number of bytes (readByteCount) from the specified file into the character buffer. The first parameter to the read function is the file descriptor of the source file (srcFileDesc). The second parameter is a pointer to the character buffer, declared previously in the program. The last parameter of the function is the size of the buffer.

The function returns the number of bytes actually read (up to 4,096 at a time, the size of BUF_SIZE). The first if clause breaks out of the loop if a value of 0 or less is returned.

If the number of bytes read is 0, the end of the file has been reached and the reading is done, so the loop breaks and the next file can be copied. If the number of bytes read is less than 0, an error has occurred and the loop also ends.

Once a chunk has been read, the program writes it out. The write function takes three arguments. The first is the file descriptor to write to, the second is the character buffer, and the third is the number of bytes to write (the number just read). The function returns the number of bytes written.

If 0 or fewer bytes are written, a write error has occurred, so the second if clause exits the program.

The while loop reads and copies the file, one buffer (up to 4,096 bytes) at a time, until the whole file has been copied. When the copying is done, you can copy the directory of randomly selected MP3 files to your smartphone.

The copy routine is fairly efficient because it uses the Linux read and write system calls directly.

Improving the code

This program is simple, and it could be improved in terms of its user interface and how flexible it is. For instance, you can implement a function that calculates the number of files in the source directory so you don't have to enter it manually. You can add options so you can pass the percentage and path non-interactively. But the code does what I need it to do, and it's a demonstration of the simple efficiency of the C programming language.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
