Open-source News

Open-Source NVIDIA "Nouveau" Driver Sees Few Changes For Linux 5.20

Phoronix - Wed, 07/13/2022 - 17:19
There hasn't been much to report on lately for the reverse-engineered Nouveau driver providing open-source NVIDIA GPU driver support on Linux. Several recent Linux kernel series haven't even seen any Nouveau DRM/KMS driver pull requests with changes. For the upcoming Linux 5.20 cycle, a set of Nouveau changes was sent in today to DRM-Next, but it's quite tiny...

Intel oneAPI GPU Rendering Appears Ready For Blender 3.3

Phoronix - Wed, 07/13/2022 - 17:00
Intel's effort to add oneAPI/SYCL support to Blender for GPU acceleration with forthcoming Arc Graphics hardware appears all buttoned up for the upcoming Blender 3.3 release...

FDC3 2.0 Drives Desktop Interoperability Across the Financial Services Ecosystem

The Linux Foundation - Wed, 07/13/2022 - 16:45
The Fintech Open Source Foundation builds on the success of FDC3, its most adopted open source project to date

New York, NY – July 13, 2022 – The Fintech Open Source Foundation (FINOS), the financial services umbrella of the Linux Foundation, announced today during its Open Source in Finance Forum (OSFF) London the launch of FDC3 2.0. FDC3 supports efficient, streamlined desktop interoperability between financial institutions with enhanced connectivity capabilities. 

The global FDC3 community is fast-growing and includes application vendors, container vendors, a large presence from sell-side firms, and growing participation from buy-side firms, all collaborating on advancing the standard. 

You can check out all the community activity here: http://fdc3.finos.org/community

The latest version of the standard delivers universal connectivity to the financial industry’s desktop applications with a significant evolution of all four parts of the Standard: the Desktop Agent API, the App Directory providing access to apps, and the intent and context messages that they exchange. 

MAIN IMPROVEMENTS

  • FDC3 2.0 significantly streamlines the API for both app developers and desktop agent vendors alike, refining the contract between these two groups based on the last three years of working with FDC3 1.x. 
  • Desktop agents now support two-way data-flow between apps (both single transactions and data feeds), working with specific instances of apps and providing metadata on the source of messages – through an API that has been refined through feedback from across the FDC3 community.
  • This updated version also redefines the concept of the “App Directory”, simplifying the API, greatly improving the App Record and the discoverability experience for users and making the App Directory fit-for-purpose for years to come (and the explosion of vendor interest FDC3 is currently experiencing).
  • Finally, FDC3 2.0 includes a host of new standard intents and context, which define and standardize message exchanges for a range of very common workflows, including interop with CRMs, Communication apps (emails, calls, chats), data visualization tools, research apps and OMS/EMS/IMS systems. This is one of the most exciting developments as it represents diverse parts of the financial services software industry working together through the standard.

MAIN USES

  • Help Manage Information Overload. Finance is an information-dense environment. Typically, traders will use several different displays so that they can keep track of multiple information sources at once. FDC3 helps with this by sharing the “context” between multiple applications, so that they collectively track the topic the user is focused on.
  • Work Faster. FDC3 standardizes a way to call actions and exchange data between applications (called “intents”). Applications can contribute intents to each other, extending each other’s functionality. Instead of the user copy-and-pasting bits of data from one application to another, FDC3 makes sure the intents have the data they need to seamlessly transition activity between applications.
  • Platform-Agnostic. As an open standard, FDC3 can be implemented on any platform and in any language. All that is required is a “desktop agent” that supports the FDC3 standard, which is responsible for coordinating application interactions. FDC3 is successfully running on Web and Native platforms in financial institutions around the world.
  • End the integration nightmare. By providing support for FDC3, vendors and financial organizations alike can avoid the bilateral or trilateral integration projects that plague desktop app roll-out, cause vendor lock-in and result in a slow pace of change on the Financial Services desktop.

“It is very rewarding to see the recent community growth around FDC3,” said Jane Gavronsky, CTO of FINOS. “More and more diverse participants in the financial services ecosystem recognize the key role a standard such as FDC3 plays for achieving a true open financial services ecosystem. We are really excited about FDC3 2.0 and the potential for creating concrete, business-driven use cases that it enables.” 

What this means for the community 

“The wide adoption of the FDC3 standard shows the relevance of the work being conducted by FINOS. At Symphony we are supporters and promoters of this standard. This latest version, FDC3 2.0, and its improvements demonstrate substantial progress in this work and its growing importance to the financial services industry,” said Brad Levy, Symphony CEO.

“The improvements to the App Directory and its ramifications for market participants and vendors are game-changing enough in themselves to demand attention from everyone: large sell-sides with large IT departments, slim asset managers who rely on vendor technology, and vendors themselves”, said Jim Bunting, Global Head of Partnerships, Cosaic.

“FDC3 2.0 delivers many useful additions for software vendors and financial institutions alike. Glue42 continues to offer full support for FDC3 in its products. For me, the continued growth of the FDC3 community is the most exciting development”, said Leslie Spiro, CEO, Tik42/Glue42. “For example, recent contributions led by Symphony, SinglePoint and others have helped to extend the common data contexts to cover chat and contacts; this makes FDC3 even more relevant and strengthens our founding goal of interop ‘without requiring prior knowledge between apps’.” 

“Citi is a big supporter of FDC3 as it has allowed us to simplify how we create streamlined intelligent internal workflows, and partner with strategic clients to improve their overall experience by integrating each other’s services. The new FDC3 standard opens up even more opportunities for innovation between Citi and our clients,” said Amit Rai, Technology Head of Markets Digital & Enterprise Portal Framework at Citi.

“FDC3 has allowed us to build interoperability within our internal application ecosystem in a way that will allow us to do the same with external applications as they start to incorporate these standards,” said Bhupesh Vora, European Head of Capital Markets Technology, Royal Bank of Canada. “The next evolution of FDC3 will ensure we continue to build richer context sharing capabilities with our internal applications and bring greater functionality to our strategic clients through integration with the financial application ecosystem for a more cohesive experience overall.”

“Interoperability allows the Trading team to take control of their workflows, allowing them to reduce the time it takes to get to market. In addition they are able to generate alpha by being able to quickly sort vast, multiple sources of data,” said Carl James, Global Head of Fixed Income Trading, Pictet Asset Management. 

As FINOS sees continued growth and contribution to the FDC3 standard, the implementation of FDC3 2.0 will allow more leading financial institutions to take advantage of enhanced desktop interoperability. The steady stream of contributions also reflects the wider adoption of open source technology, as reported in last year’s 2021 State of Open Source in Financial Services annual survey. To get involved in this year’s survey, visit https://www.research.net/r/ZN7JCDR to share key insights into the ever-growing open source landscape in financial services. 

Skill up on FDC3 by taking the Linux Foundation’s free FDC3 training course, or contact us at https://www.finos.org/contact-us. Hear from Kris West, Principal Engineer at Cosaic and Lead Maintainer of FDC3, on the FINOS Open Source in Finance Podcast, where he discusses why it was important to change the FDC3 standard in order to keep up with the growing number of use cases end users are contributing to the community.

About FINOS

FINOS (The Fintech Open Source Foundation) is a nonprofit whose mission is to foster adoption of open source, open standards and collaborative software development practices in financial services. It is the center for open source developers and the financial services industry to build new technology projects that have a lasting impact on business operations. As a regulatory compliant platform, the foundation enables developers from these competing organizations to collaborate on projects with a strong propensity for mutualization. It has enabled codebase contributions from both the buy- and sell-side firms and counts over 50 major financial institutions, fintechs and technology consultancies as part of its membership. FINOS is also part of the Linux Foundation, the largest shared technology organization in the world.

The post FDC3 2.0 Drives Desktop Interoperability Across the Financial Services Ecosystem appeared first on Linux Foundation.

A guide to productivity management in open source projects

opensource.com - Wed, 07/13/2022 - 15:00
Thabang Mashologu

Open source is one of the most important technology trends of our time. It’s the lifeblood of the digital economy and the preeminent way that software-based innovation happens today. In fact, it’s estimated that over 90% of software released today contains open source libraries.

There's no doubt the open source model is effective and impactful. But is there still room for improvement? When comparing the broader software industry’s processes to that of open source communities, one big gap stands out: productivity management.

By and large, open source project leads and maintainers have been slow to adopt modern productivity and project management practices and tools commonly embraced by startups and enterprises to drive the efficiency and predictability of software development processes. It’s time we examine how the application of these approaches and capabilities can improve the management of open source projects for the better.

Understanding productivity in open source software development

The open source model, at its heart, is community-driven. There is no single definition of success for different communities, so a one-size-fits-all approach to measuring success does not exist. And what we have traditionally thought of as productivity measures for software development, like commit velocity, the number of pull requests approved and merged, and even the lines of code delivered, only tell part of the story.

Open source projects are people-powered. We need to take a holistic and humanistic approach to measuring productivity that goes beyond traditional measures. I think this new approach should focus on the fact that great open source is about communication and coordination among a diverse community of contributors. The level of inclusivity, openness, and transparency within communities impacts how people feel about their participation, resulting in more productive teams.

These and other dimensions of what contributes to productivity on open source teams can be understood and measured with the SPACE framework, which was developed based on learnings from the proprietary world and research conducted by GitHub, the University of Victoria in Canada, and Microsoft. I believe that the SPACE framework has the potential to provide a balanced view of what is happening in open source projects, which would help to drive and optimize collaboration and participation among project team members.

A more accurate productivity framework

The SPACE framework acronym stands for:

  • Satisfaction and well-being
  • Performance
  • Activity
  • Communication and collaboration
  • Efficiency and flow

Satisfaction and well-being refer to how fulfilled developers feel with the team, their tools, and the environment, as well as how healthy and happy they are. Happiness is somewhat underrated as a factor in the success of teams, yet there is strong evidence of a direct correlation between the way people feel and their productivity. In open source communities, surveying contributors, committers, and maintainers about their attitudes, preferences, and priorities is essential to understanding how they feel about what is being done and how.

Performance in this context is about evaluating productivity in terms of the outcomes of processes instead of output. Team-level examples are code-review velocity (which captures the speed of reviews) and story points shipped. More holistic measures focus on quality and reliability. For example, was the code written in a way that ensures it will reliably do what it is supposed to do? Are there a lot of bugs in the software? Is industry adoption of the software growing?

Open source activity focuses on measuring design and development and CI/CD metrics, like build, test, deployments, releases, and infrastructure utilization. Example metrics for open source projects are the number of pull requests, commits, code reviews completed, build releases, and project documents created.

Communication and collaboration capture how people and teams work together, communicate, and coordinate efforts with high transparency and awareness within and between teams. Metrics in this area focus on the vibrancy of forums, as measured by the number of posts, messages, questions asked and answered, and project meetings held.

Finally, efficiency and flow refer to the ability to complete work and progress towards it with minimal interruptions and delays. At the individual developer level, this is all about getting into a flow that allows complex tasks to be completed with minimal distractions, interruptions, or context switching. At the project team level, this is about optimizing flow to minimize the delays and handoffs that take place in the steps needed to take software from an idea or feature request to being written into code. Metrics are built around process delays, handoffs, time on task, and the ease of project contributions and integrations.

Applying the SPACE framework to open source teams

Here are some sample metrics to illustrate how the SPACE framework could be used for an open source project.

Satisfaction and well-being
  • Contributor satisfaction
  • Community sentiment
  • Community growth & diversity
Performance
  • Code review velocity
  • Story points shipped
  • Absence of bugs
  • Industry adoption
Activity
  • Number of pull requests
  • Number of commits
  • Number of code reviews
  • Number of builds
  • Number of releases
  • Number of docs created
Communication and collaboration
  • Forum posts
  • Messages
  • Questions asked & answered
  • Meetings
Efficiency and flow
  • Code review timing
  • Process delays & handoffs
  • Ease of contributions/integration
Tools for managing open source projects must be fit for purpose

There is an opportunity to leverage the tools and approaches startups and high-growth organizations use to understand and improve open source development efficiency, all while putting open source’s core tenets, like openness and transparency, into practice.

Tools used by open source teams should enable maintainers and contributors to be productive and successful, while allowing the projects to be open and welcoming to everyone, including developers who may work in multiple organizations and even competing companies. It is also critical to provide an excellent onboarding experience for new contributors and accelerate their time-to-understanding and time-to-contribution.

Tools for managing open source projects should transparently manage data and accurately reflect project progress based on where the collaboration happens: in the codebase and repositories. Open source teams should be able to see real-time updates based on updates to issues and pull requests. And, project leads and maintainers should have the flexibility to decide whether access to the project should be completely public or if it should be limited to trusted individuals for issues or information of a more sensitive nature.

Ideally, tools should allow self-governed project teams to streamline coordination, processes, and workflows and eliminate repetitive tasks through automation. This reduces human friction and empowers maintainers and contributors to focus on what really matters: contributing to the ecosystem or community and delivering releases faster and more reliably.

The tools teams use should also support collaboration from people wherever they are. Since open source teams work in a remote and asynchronous world, tools should be able to integrate everyone’s contributions wherever and whenever they occur. These efforts should be enabled by great documentation stored in a central and easily accessible place. And finally, the tools should enable continuous improvement based on the types of frameworks and measures of productivity outlined above.

Features that allow for increased transparency are especially important for open source projects. Tools should help keep community members aligned and working towards a common goal with a project roadmap that shows work in flight, progress updates, and predicted end dates.

Conclusion

Open source projects are a benefit to us all, and as such, it benefits everyone to make the processes that exist within these projects as productive as possible.

By leveraging concepts like the SPACE framework and modern tools, we can ditch the spreadsheets and manual ways of tracking, measuring, and improving productivity. We can adapt approaches that power software development in the proprietary world and leverage modern tools that can help increase the quality, reliability, and predictability of open source software releases. Open source is far too important to leave to anything less.

Enhance productivity by applying the SPACE framework to open source teams.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I create music playlists on Linux

opensource.com - Wed, 07/13/2022 - 15:00
Rikard Grossma…

I recently wrote a C program in Linux to create a smaller random selection of MP3 files from my extensive MP3 library. The program goes through a directory containing my MP3 library, and then creates a directory with a random, smaller selection of songs. I then copy the MP3 files to my smartphone to listen to them on the go.

Sweden is a sparsely populated country with many rural areas where you don't have full cell phone coverage. That's one reason for having MP3 files on a smartphone. Another reason is that I don't always have the money for a streaming service, so I like to have my own copies of the songs I enjoy.

You can download my application from its Git repository. I wrote it for Linux specifically, in part because it's easy to find well-tested file I/O routines on Linux. Many years ago, I tried writing the same program on Windows using proprietary C libraries, and I got stuck trying to get the file copying routine to work. Linux gives the user easy and direct access to the file system.

In the spirit of open source, it didn't take much searching to find Linux file I/O code to draw on, along with some code for allocating memory. The random number generation code I wrote myself.

The program works as described here:

  1. It asks for the source and destination directory.
  2. It asks for the number of files in the directory of MP3 files.
  3. It asks for the percentage (from 1.0 to 88.0 percent) of your collection that you wish to copy. You can also enter a fractional number like 12.5% if, for example, you have a collection of 1000 files and wish to copy 125 of them rather than 120. I put the cap at 88% because copying more than 88% of your library would mostly generate a collection similar to your base collection. Of course, the code is open source, so you can freely modify it to your liking.
  4. It allocates memory using pointers and malloc. Memory is required for several actions, including the list of strings representing the files in your music collection. There is also a list to hold the randomly generated numbers.
  5. It generates a list of random numbers in the range of all the files (for example, 1 to 1000, if the collection has 1000 files).
  6. It copies the files.

Some of these parts are simpler than others, but the code is only about 100 lines:

#include <stdio.h>      /* include necessary header files */
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

#define BUF_SIZE 4096      /* use a buffer of 4096 bytes */
#define OUTPUT_MODE 0700   /* protect the output file */
#define MAX_STR_LEN 256

int main(void)
{
    DIR *d;
    struct dirent *dir;
    char strTemp[256], srcFile[256], dstFile[256], srcDir[256], dstDir[256];
    char **ptrFileLst;
    char buffer[BUF_SIZE];
    int nrOfStrs = -1, srcFileDesc, dstFileDesc, readByteCount, writeByteCount, numFiles;
    int indPtrFileAcc, q;
    float nrFilesCopy;
    /* vars for generating the random number list */
    int i, k, curRanNum, curLstInd, numFound, numsToGen, largNumRange;
    int *numLst;
    float procFilesCopy;

    printf("Enter name of source Directory\n");
    scanf("%s", srcDir);
    printf("Enter name of destination Directory\n");
    scanf("%s", dstDir);
    printf("How many files does the directory with mp3 files contain?\n");
    scanf("%d", &numFiles);
    printf("What percent of the files do you wish to make a random selection of\n");
    printf("enter a number between 1 and 88\n");
    scanf("%f", &procFilesCopy);

    /* allocate memory for the file list and the list of random numbers */
    ptrFileLst = (char **) malloc(numFiles * sizeof(char *));
    for (i = 0; i < numFiles; i++)
        ptrFileLst[i] = (char *) malloc(MAX_STR_LEN * sizeof(char));
    nrFilesCopy = (procFilesCopy / 100.0f) * numFiles;
    numsToGen = (int) nrFilesCopy;
    largNumRange = numFiles;
    numLst = (int *) malloc(numsToGen * sizeof(int));

    /* generate a list of unique random indexes in the range 0..numFiles-1 */
    srand(time(NULL));
    curLstInd = -1;
    for (i = 0; i < numsToGen; i++) {
        do {
            curRanNum = rand() % largNumRange;
            numFound = 0;
            for (k = 0; k <= curLstInd; k++)
                if (numLst[k] == curRanNum)
                    numFound = 1;
        } while (numFound);
        curLstInd++;
        numLst[curLstInd] = curRanNum;
    }

    /* read the file names in the source directory, skipping hidden entries */
    d = opendir(srcDir);
    if (d) {
        while ((dir = readdir(d)) != NULL) {
            strcpy(strTemp, dir->d_name);
            if (strTemp[0] != '.') {
                nrOfStrs++;
                strcpy(ptrFileLst[nrOfStrs], strTemp);
            }
        }
        closedir(d);
    }

    /* copy each randomly selected file, BUF_SIZE bytes at a time */
    for (q = 0; q <= curLstInd; q++) {
        indPtrFileAcc = numLst[q];
        strcpy(srcFile, srcDir);
        strcat(srcFile, "/");
        strcat(srcFile, ptrFileLst[indPtrFileAcc]);
        strcpy(dstFile, dstDir);
        strcat(dstFile, "/");
        strcat(dstFile, ptrFileLst[indPtrFileAcc]);
        srcFileDesc = open(srcFile, O_RDONLY);
        dstFileDesc = creat(dstFile, OUTPUT_MODE);
        while (1) {
            readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
            if (readByteCount <= 0)
                break;
            writeByteCount = write(dstFileDesc, buffer, readByteCount);
            if (writeByteCount <= 0)
                exit(4);
        }
        /* close the files */
        close(srcFileDesc);
        close(dstFileDesc);
    }
    return 0;
}

This code is possibly the most complex:

while (1) {
    readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
    if (readByteCount <= 0)
        break;
    writeByteCount = write(dstFileDesc, buffer, readByteCount);
    if (writeByteCount <= 0)
        exit(4);
}

This reads a number of bytes (readByteCount) from the source file into the character buffer. The first parameter to the function is the file descriptor (srcFileDesc) returned earlier by open. The second parameter is a pointer to the character buffer, declared previously in the program. The last parameter of the function is the size of the buffer.

The read function returns the number of bytes actually read, at most BUF_SIZE (4096) per call. The if clause breaks out of the loop if zero or fewer bytes are returned: a return value of 0 means the end of the file has been reached and the copying of that file is done, while a negative value indicates a read error.

Once a chunk has been read, it is written out. The write function takes three arguments: the first is the file descriptor to write to, the second is the character buffer, and the third is the number of bytes to write. The function returns the number of bytes written.

If zero or fewer bytes are written, a write error has occurred, so the second if clause exits the program.

The while loop reads and copies the file, up to 4096 bytes at a time, until the file is copied. When the copying is done, you can copy the directory of randomly selected MP3 files to your smartphone.

The copy routine is fairly efficient because it uses Linux file system calls directly.

Improving the code

This program is simple, and it could be improved in terms of its user interface and flexibility. You can implement a function that calculates the number of files in the source directory so you don't have to enter it manually, for instance. You can add options so you can pass the percentage and paths non-interactively. But the code does what I need it to do, and it's a demonstration of the simple efficiency of the C programming language.

Use this C program I made on Linux to listen to your favorite songs on the go.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

The RADV Driver Developer Experience Working With AMD's Next-Gen Geometry "NGG"

Phoronix - Wed, 07/13/2022 - 15:00
Mesa's Radeon Vulkan "RADV" driver contributor Timur Kristóf, known for being one of the Valve contractors improving the open-source Linux graphics stack, has shared his experiences working on Next-Gen Geometry (NGG) support for AMD RDNA GPUs with this open-source driver...

Linux Lite – An Ubuntu-Based Distribution for Linux Newbies

Tecmint - Wed, 07/13/2022 - 12:32
The post Linux Lite – An Ubuntu-Based Distribution for Linux Newbies first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Linux Lite is a free, easy-to-use, and open-source Linux distribution based on the Ubuntu LTS series of releases. By design, it is a lightweight and user-friendly distribution that was developed with Linux beginners in mind.

XWayland 22.1.3 Released Due To XKB Security Vulnerabilities

Phoronix - Wed, 07/13/2022 - 12:00
Disclosed on Tuesday were two new X.Org Server security vulnerabilities concerning possible local privilege escalation and remote code execution. X.Org Server 21.1.4 was released with these mitigations to the XKB extension while XWayland is also vulnerable and has now been patched with XWayland 22.1.3...

Enabling Open Source Projects with Impactful Engineering Experience

The Linux Foundation - Wed, 07/13/2022 - 05:17

This post originally appeared on the FINOS Community Blog. The author, James McLeod, is the Director of Community at the Fintech Open Source Foundation, a project of the Linux Foundation. You may also want to listen to the Open Source in Finance podcast.

I often talk about “engineering experience” and the importance for open source projects to provide fast, easy and impactful ways for open source consumers to realise return on engagement. Just like e-commerce stores that invest in user experience to encourage repeat sales, successful open source projects provide a slick installation, well written contextual documentation and a very compelling engagement model that encourages collaboration.

In fact, within the open source community, it’s possible to drive commitment to open source projects through “engineering experience”. Successful projects develop lives of their own and build communities of thousands that flock to repos, Meetups and in-person events.

This article is focused on the “engineering experience” related to automation and deployment, but future articles will also cover providing an engaging README.md, contextual documentation and the workflows needed to engage new and experienced open source contributors.

ENGINEERING EXPERIENCE PROVIDES DAY ZERO OPEN SOURCE VALUE

The risk of ignoring an open source project’s “engineering experience” is that the project becomes a lifeless repository waiting for a community to discover it. Imagine the questions already answered in dormant repos that could be solving real-world problems if engagement were easy.

At FINOS we’re driven to provide day zero value to financial services engineers looking to utilise FINOS open source projects. This philosophy is demonstrated by FINOS projects like Legend, Waltz, Perspective, and FDC3, which embrace open source methodologies for ease of installation.

Without investing in a healthy “engineering experience”, engineering teams might find themselves working through reams of documentation, setting flags and system settings that could take days to configure and test against each and every operating system on their route to production.

The scenario highlighted above has been mitigated by FINOS projects Legend and Waltz by using Juju and Charms, an open source framework that enables easy installation and automated operations across hybrid cloud environments. Without Juju and Charms, Legend and Waltz would need to be manually installed and configured for every single project instance.

By engaging Juju and Charms, Legend and Waltz are shipped using a method that enables the projects to be installed across the software development lifecycle. This accelerator provides a positive “engineering experience” whilst increasing engineering velocity and saving development and infrastructure costs.

From the very first point of contact, open source projects should be smooth and simple to understand, install, deploy and leverage. The first set of people an open source project will meet on its journey to success is the humble developer looking for tools to accelerate projects.

Hybrid cloud and containerisation are one powerful example of how projects should be presented to engineers to vastly improve the end-to-end engineering experience; another is the entire node.js and JavaScript ecosystem.

ENGINEERING EXPERIENCE ENABLES NODE.JS AND JAVASCRIPT OPEN SOURCE DEVELOPMENT

Take node.js and the various ways the node ecosystem can be maintained. I’m a massive fan of Node Version Manager, an open source project that enables the node community to install and switch between versions of node from a simple, easy-to-use command line tool.

Node Version Manager removes the requirement to install, uninstall and reinstall different versions of node on your computer from downloaded binaries. Node Version Manager runs on your local computer and manages the version of node needed with simple bash commands.

After installing nvm with a simple curl of the latest install.sh, Node Version Manager runs on your local computer (a Mac, in my case), and node can be installed with nvm install node. It's a simple way to keep the node.js community engaged, updated, and supported. Not only that, but the vast open source world of JavaScript can now be leveraged.

With Node Version Manager provided as an open source tool, the further “engineering experience” of yarn and npm can be explored, which enables FINOS projects, like Perspective and FDC3, to be installed using node.js to accelerate the financial services industry with simple commands like yarn add @finos/perspective and yarn add @finos/fdc3.

The chaining together of “engineering experience” that removes the pain of manual configuration by leveraging containers and command line automation not only invites experimentation, but has contributed greatly to the exponential success of open source itself.

As this series moves through the different ways to engage open source communities and make open source projects successful, it would be great to hear about your own “engineering experience” by emailing james.mcleod@finos.org or by raising a GitHub issue on the FINOS Community Repo.

The post Enabling Open Source Projects with Impactful Engineering Experience appeared first on Linux Foundation.
