Open-source News

Last Call On This Year's Premium Holiday Deal

Phoronix - Wed, 11/30/2022 - 18:30
Here's a last call: if you want to participate in this year's Black Friday / Cyber Monday holiday deal to help support Phoronix, today is the last day of the sale...

Google Chrome 108 Released As Last Major Version For 2022

Phoronix - Wed, 11/30/2022 - 18:10
Google released Chrome 108 on Tuesday as the last major feature update of 2022 for this cross-platform web browser...

DXVK-NVAPI 0.6 Released With DLSS Fixes, Other NVIDIA Enhancements

Phoronix - Wed, 11/30/2022 - 16:00
DXVK-NVAPI 0.6 is now available as the newest feature release to this open-source project that bridges the DXVK Direct3D-on-Vulkan layer with NVIDIA's proprietary driver NVAPI library for utilizing various NVIDIA-specific features...

Is sustainability still a thing in open source?

opensource.com - Wed, 11/30/2022 - 16:00
By Daniel Curto-Millet

Eugen Rochko just reported that the social service Mastodon had "hit 1,028,362 monthly active users […] 1,124 new Mastodon servers since Oct 27 and 489,003 new users". Previously known mainly among open source developers, Mastodon has suddenly become mainstream, promising to take inclusive, open, and free values to social media. If Mastodon was merely sustainable before, it is now thriving, attracting both users and developers, and able to launch more than 1,000 servers in a matter of days. How can this increase in users and infrastructure be explained? In this article, we want to suggest that there are different kinds of sustainability in open source and that these kinds can interact in interesting ways.

We know intuitively that open source is sustainable: It's been around for decades, its importance keeps growing in the digital economy, and public institutions actively try to participate in open source ventures. Still, open source sustainability remains a key concern for both practitioners and academics. On the practice side, recent projects like Bitergia and Augur are identifying holistic measures of sustainability and project health. An entire group meets to discuss open source sustainability. On the academic side, a recent call for papers from IEEE asks for contributions on how public bodies can improve the long-term sustainability of open source projects.

3 types of sustainability

A recent article argues that there are three kinds of sustainability: Resource-based, interactional, and infrastructural. Resource-based sustainability has historically attracted the most concern. It refers to the capacity of open source actors to attract resources, such as developers, or value, such as knowledge. Much of the world used to be bewildered by open source's ability to attract contributors. One question asked by economists back in the day was, "Why should thousands of top-notch programmers contribute freely to the provision of a public good? Any explanation based on altruism only goes so far" (Lerner and Tirole, 2000, p. 2). Although the sustainability of open source no longer hangs on its ability to attract people, large numbers of contributors are still needed. Projects that no longer attract them risk maintenance and security issues.

Interactional sustainability concerns what kinds of relations are created and sustained in open source. What value do certain interactions bring? How do developers collaborate to make the best software possible? For example, the historical debates between Free software and open source deal with what kind of interactions should be favored in open source, with whom, and for what purpose. The existence of mentorship programs, such as Red Hat's, shows the importance of passing down 'open' and reciprocal values. More recently, the introduction of codes of conduct has also aimed at better defining desired values of interaction, ones that are inclusive.

The final kind of sustainability is about the infrastructures needed for any work to be carried out. A lot of open source development depends on infrastructures. Without GitLab, GitHub, or even Git, open source development would be much costlier and more complicated to set up. These kinds of projects act as platforms that enable many open source projects to develop and collaborate. They also create an underlying basis for how open source projects coordinate. Developers can get up to speed and start contributing to other projects more rapidly when they share versioning systems.

Sustainability strategies in open source

Once you think of sustainability as coming in multiple kinds, you can start analyzing how those kinds interact. A recent Opensource.com article showcases one such example of sustainability relations. The article notes that it may seem paradoxical that firms must get involved in projects they don't control and collaborate with competitors. A firm's sustainability depends on its ability to foster a sustainable community. This is a hierarchical relationship: the sustainability of one leads to the sustainability of the other. But you may also question this hierarchical relation: do the different actors have different priorities or values of sustainability? Does the community have different sustainability goals than firms?

These questions lead you to think strategically about open source sustainability. Strategizing sustainability has a direct consequence: "achieving" sustainability in open source may not be possible. Actors take a calculative (though potentially mutually beneficial) approach to sustainability. For example, an organization might want to quickly grow a project, while the community might prefer to focus on infrastructural sustainability. Further, sustainability priorities may evolve and become contested. Indeed, because there are different kinds of sustainability, you can also find different actors (the community, core developers, organizations, and so on) holding contradictory sustainability objectives. These objectives may each be worthwhile on their own but together may incur trade-offs.

I'll look at one event in the Linux kernel's history that shows how the understanding of sustainability was contingent on circumstances. In 2002, as the Linux kernel was growing, Linus Torvalds decided to change versioning systems to better accommodate the increasing volume of changes (infrastructural sustainability), since growing numbers of developers were contributing to the project. Having looked at the existing systems available, he finally chose BitKeeper because it fit best with development practices in Linux. That BitKeeper was a proprietary project did not bother Linus. Yet the decision to migrate to this versioning system annoyed many developers, with the risk that some might stop contributing (a decrease in resource-based sustainability). How could the open source flagship project rely on a proprietary system (a decrease in interactional sustainability)? Could the owners of the version control system cut access to it and take the community hostage (an infrastructural sustainability risk)? Despite an email from the founder of BitKeeper promising not to do that, an attempt to reverse engineer BitKeeper (using developer resources and prioritizing interactional sustainability) ended the collaboration between Linux and BitKeeper.

The more recent example of Mastodon shows a synergistic growth scenario. The social platform could grow its infrastructural sustainability exponentially by adding new servers and connecting new instances around topics, interests, or any kind of group. Meanwhile, its interactional sustainability was already well defined in clear codes of conduct, adapted per instance, leaving the platform ready to increase its resource-based sustainability.

A sustainable future

These are brief examples, but they are enough to see that different kinds of sustainability influence each other, and not always positively. If there are multiple kinds of sustainability, then can you maximize all of them? Or should you maximize one as long as it does not negatively influence others? What type of sustainability should be prioritized? Or should you look for some sort of balance? These strategic questions are important for future, newly established, and mature open source projects since they face different sustainability struggles, and also for funding agencies that may want to evaluate how projects understand their sustainability priorities.


Get to know Lua for loops in 4 minutes

opensource.com - Wed, 11/30/2022 - 16:00
By Seth Kenlon

In programming, iteration is an important concept because code often must scan over a set of data several times so that it can process each item individually. Control structures enable you to direct the flow of the program based on conditions that are often established dynamically as the program is running. Different languages provide different controls, and in Lua, there's the while loop, for loop, and repeat until loop. This article covers for loops. I will cover while and repeat until loops in a separate article.

For loop

A for loop takes a known quantity of items and ensures that each item is processed. An "item" can be a number. It can also be a table containing several entries or any Lua data type. The syntax and logic are a little flexible, but a for loop accepts these parameters, which together describe a counter:

  • Starting value of the counter
  • Stop value
  • The increment by which you want the counter to advance

For instance, suppose you have three items and want Lua to process each. Your counter could start at 3 and last until 1, at an increment of -1. That renders the count of 3, 2, 1.

mytable = { "zombie", "Halloween", "apocalypse" }

for count = 3, 1, -1 do
  print(count .. ": " .. mytable[count])
end

Run the code to ensure all three items are getting processed:

$ lua ./for.lua
3: apocalypse
2: Halloween
1: zombie

This code effectively processed the table in "reverse" because it was a countdown. You can count up, instead:

for count = 1, 3, 1 do
  print(count .. ": " .. mytable[count])
end

This example processes the table from lowest index to highest:

$ lua ./for.lua
1: zombie
2: Halloween
3: apocalypse

Increments

You can change the increment, too. For instance, maybe you want a zombie apocalypse without all the pomp and circumstance of Halloween:

mytable = { "zombie", "Halloween", "apocalypse" }

for count = 1, 3, 2 do
  print(mytable[count])
end

Run the code:

$ lua ./for.lua
zombie
apocalypse

The example printed the items at indexes 1 and 3 because the first count was 1, which was then incremented by 2 (for a total of 3).
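The counter doesn't have to index a table at all. A standalone numeric for loop behaves the same way; here's a quick sketch, separate from the table examples above:

for count = 0, 10, 2 do
  print(count)
end

This prints the even numbers 0 through 10: the counter starts at 0 and advances by 2 until it would pass 10.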

Counter

Sometimes you don't know the number of times you need Lua to iterate over data. In this case, you can set your counter to a variable populated by some other process.

Also, the word count isn't a keyword. It's just what I'm using in my sample code for clarity. It's common for programmers to use something shorter, such as i or c.

var = os.time()

if var%2 == 0 then
  mytable = { var }
else
  mytable = { "foo", "bar", "baz" }
end

for c = 1, #mytable, 1 do
  print(mytable[c])
end

This code creates a variable containing the timestamp of when it was launched. If the timestamp is even (its remainder when divided by 2 is 0), then just the timestamp is placed into a table. If the timestamp is odd, three strings are placed into a table instead.

Now you can't be sure how many times your for loop needs to run. It's either once or thrice, but there's no way to be sure. The solution is to set the starting count to 1 and the final count to the length of the table (#mytable is the built-in shortcut to determine the length of a table).

It might take a few runs of the script to see both results, but eventually, you end up with something like this:

$ lua ./dynamic.lua
1665447960

$ lua ./dynamic.lua
foo
bar
baz

For loops with pairs and ipairs

If you've already read my article on table iteration, then you're already familiar with one of the most common for loops in Lua. This one uses the pairs or ipairs function to iterate over a table:

mytable = { "zombie", "Halloween", "apocalypse" }

for i,v in ipairs(mytable) do
  print(i .. ": " .. v)
end

The pairs and ipairs functions "unpack" the table and dump the values into the variables you provide. In this example, I use i for index and v for value, but the variables' names don't matter.

$ lua ./for.lua
1: zombie
2: Halloween
3: apocalypse

For loop

The for loop structure is common in programming and very common in Lua due to its frequent use of tables and the pairs function. Understanding the for loop structure and the options you have when controlling it means you can make clever decisions about how to process data in Lua.


Build test scripts for your IoT platform

opensource.com - Wed, 11/30/2022 - 16:00
By Chongyuan Yin

In my previous article, I introduced the open source test tool JMeter and used a simple HTTP test as an example to demonstrate its capabilities. This article shows you how to build test scripts for complex test scenarios.

The user interface displays a JMeter test script in a "tree" format. The saved test script (in the .jmx format) is XML. The JMeter script tree treats the test plan as the root node, and the test plan includes all test components. In the test plan, you can configure user-defined variables that components throughout the entire test plan can reference. You can also configure thread group behavior, library files used in the test, and so on. You can build rich test scenarios using the various test components in a test plan.
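To make the relationship between the script tree and the XML file concrete, here is a heavily abridged, illustrative sketch of a saved .jmx file. Real files contain many version- and component-specific attributes and nested property elements that are omitted here:

<jmeterTestPlan>
  <hashTree>
    <TestPlan testname="Test Plan"/>
    <hashTree>
      <ThreadGroup testname="Thread Group"/>
      <hashTree>
        <HTTPSamplerProxy testname="HTTP Request"/>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

Each component is paired with a hashTree element that holds its children, mirroring the tree you see in the user interface.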

Test components in JMeter generally have the following categories:

  • Thread group
  • Sampler
  • Logic controller
  • Listener
  • Configuration element
  • Assertion
  • Timer
  • Pre-processor
  • Post-processor
Thread groups

A thread group is the beginning point for all test plans (so all samplers and controllers must be placed under a thread group). A thread group can be regarded as a virtual user pool in which each thread is essentially a virtual user, and multiple virtual users perform the same batch of tasks simultaneously. Each thread is independent and doesn't affect the others. During the execution of one thread, the variable of the current thread doesn't affect the variable value of other threads.

(Image: Chongyuan Yin, CC BY-SA 4.0)

In this interface, the thread group can be configured in various ways.

1. Action to be taken after a sampler error

The following configuration items control whether a test continues when an error is encountered:

  • Continue: Ignore errors and continue execution.
  • Start Next Thread Loop: Ignore the error, terminate the current loop of the thread, and execute the next loop.
  • Stop Thread: Stop executing the current thread without affecting the normal execution of other threads.
  • Stop Test: Stop the entire test after the currently executing samplers have finished.
  • Stop Test Now: The entire test execution stops immediately, even if it interrupts currently executing samplers.
2. Number of threads

This is the number of concurrent (virtual) users. Each thread runs the test plan completely independently without interfering with any others. The test uses multiple threads to simulate concurrent access to the server.

3. Ramp-up period

The Ramp-up period sets the time required to start all threads. For example, if the number of threads is set to 10 and the ramp-up period is set to 100 seconds, then JMeter takes 100 seconds to start all 10 threads, with each thread beginning 10 seconds after the previous one.

If the ramp-up value is set small and the number of threads is set large, there's a lot of stress on the server at the beginning of the test.

4. Loop count

Sets the number of loops per thread in the thread group before ending.

5. Delay thread creation until needed

By default, all threads are created when the test starts. If this option is checked, threads are created when they are needed.

6. Specify thread lifetime

Control the execution time of thread groups. You can set the duration and startup delay (in seconds).

Samplers

A sampler simulates user operations. It's a running unit that sends requests to the server and receives response data from the server. A sampler is a component inside a thread group, so it must be added to the thread group. JMeter natively supports a variety of samplers, including a TCP Sampler, HTTP Request, FTP Request, JDBC Request, Java Request, and so on. Each type of sampler sends different requests to the server according to the set parameters.

TCP Sampler

The TCP Sampler connects to the specified server over TCP/IP, sends a message to the server after the connection is successful, and then waits for the server to reply.

(Image: Chongyuan Yin, CC BY-SA 4.0)

The properties that can be set in the TCP Sampler are as follows:

TCPClient classname

This represents the implementation class that handles the request. By default, org.apache.jmeter.protocol.tcp.sampler.TCPClientImpl is used, and plain text is used for transmission. In addition, JMeter also has built-in support for BinaryTCPClientImpl and LengthPrefixedBinaryTCPClientImpl. The former uses hexadecimal packets, and the latter adds a 2-byte length prefix to BinaryTCPClientImpl.

You can also provide custom implementation classes by extending org.apache.jmeter.protocol.tcp.sampler.TCPClient.

  • Target server settings: Server Name or IP and Port Number specify the hostname or IP address and port number of the server application.
  • Connection Options: Determines how you connect to the server.
    • Re-use connection: If enabled, this connection is always open; otherwise, it's closed after reading data.
    • Close Connection: If enabled, this connection is closed after the TCP sampler has finished running.
    • Set No-Delay: If enabled, the Nagle algorithm is disabled, and the sending of small packets is allowed.
    • SO_LINGER: Controls whether to wait for data in the buffer to complete transmission before closing the connection.
    • End of line (EOL) byte value: Determines the byte value that marks the end of a line. The EOL check is skipped if the specified value is greater than 127 or less than -128. For example, if a string returned by the server ends with a line feed (newline), you can set this option to 10.
  • Timeouts: Set the connect timeout and response timeout.
  • Text to send: Contains the payload you want to send.
  • Login configuration: Sets the username and password used for the connection.
HTTP Request Sampler

The HTTP Sampler sends HTTP and HTTPS requests to the web server.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Here are the settings available:

  • Name and comments
  • Protocol: Set the protocol to send the request to the target server, which can be HTTP, HTTPS, or FILE. The default is HTTP.
  • Server name or IP address: The hostname or IP address of the target server to which the request is sent.
  • Port number: The port number that the web service listens on. The default port is 80 for HTTP and 443 for HTTPS.
  • Request method: The method for sending the request, commonly including GET, POST, DELETE, PUT, TRACE, HEAD, OPTIONS, and so on.
  • Path: The target URL (excluding server address and port) to request.
  • Content encoding: How to encode the request (applicable to POST, PUT, PATCH, and FILE).
  • Advanced request options: A few extra options, including:
    • Redirect Automatically: Redirection is not treated as a separate request and is not recorded by JMeter.
    • Follow Redirects: Each redirection is treated as a separate request and is recorded by JMeter.
    • Use KeepAlive: If enabled, Connection: keep-alive is added to the request header when JMeter communicates with the target server.
    • Use multipart/form-data for POST: If enabled, requests are sent using multipart/form-data; otherwise, application/x-www-form-urlencoded is used.
  • Parameters: JMeter uses parameter key-value pairs to generate request parameters and sends them in different ways depending on the request method. For example, for GET and DELETE requests, parameters are appended to the request URL (see the sketch after this list).
  • Message body data: If you want to pass parameters in JSON format, you must configure the Content-Type as application/json in the request header.
  • File upload: Send a file in the request. This is usually how HTTP file upload behavior is simulated.
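For instance, a hypothetical GET request with two parameters, status and limit, is sent with the parameters appended to the URL, roughly like this (the host, path, and parameter names are made up for illustration):

GET /api/devices?status=online&limit=10 HTTP/1.1
Host: www.example.com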
Logic Controllers

The JMeter Logic Controller controls the execution logic of components. The JMeter website explains it like this: "Logic Controllers determine the order in which Samplers are processed."

The Logic Controller can control the execution order of the samplers. Therefore, the controller needs to be used together with the sampler. Except for the once-only controller, other logic controllers can be nested within each other.

Logic controllers in JMeter fall into two main categories: those that control the logical execution order of nodes during the execution of the test plan (such as loop and conditional controllers), and those that group samplers together (such as the Transaction Controller and Throughput Controller).

Transaction Controller

Sometimes, you want to count the overall response time of a group of related requests. In this case, you need to use a Transaction Controller.

The Transaction Controller counts the sampler execution time of all child nodes under the controller. If multiple samplers are defined under the Transaction Controller, then the transaction is considered successful only when all samplers run successfully.

Add a transaction controller using the contextual menu:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Generate parent sample: If enabled, the Transaction Controller is used as a parent sample for other samplers. Otherwise, the Transaction Controller is only used as an independent sample.

(Image: Chongyuan Yin, CC BY-SA 4.0)

For example, the unchecked Summary Report is as follows:

(Image: Chongyuan Yin, CC BY-SA 4.0)

If checked, the Summary Report is as follows:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Include duration of timer: If enabled, the time added by timers (delays inserted before and after a sampler runs) is included in the measured duration of the transaction.

Once Only Controller

The Once Only Controller, as its name implies, is a controller that executes only once. The request under the controller is executed only once during the loop execution process under the thread group. For tests that require a login, you can consider putting the login request in a Once Only Controller because the login request only needs to be executed once to establish a session.

(Image: Chongyuan Yin, CC BY-SA 4.0)

If you set the loop count to 2 and check the result tree after running, you can see that HTTP request 3, which sits under the Once Only Controller, is executed only once, while the other requests are executed twice.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Listeners

A listener is a series of components that process and visualize test result data. View Results Tree, Graph Results, and Aggregate Report are common listener components.

View Results Tree

This component displays the result, request content, response time, response code, and response content of each sampler in a tree structure. Viewing the information can assist in analyzing whether there is a problem. It provides various viewing formats and filtering methods and can also write the results to specified files for batch analysis and processing.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Configuration element

Configuration element provides support for static data configuration. It can be defined at the test plan level, or at the thread group or sampler level, with different scopes for different levels. Configuration elements mainly include User Defined Variables, CSV Data Set Config, TCP Sampler Config, HTTP Cookie Manager, etc.

User-Defined Variables

(Image: Chongyuan Yin, CC BY-SA 4.0)

By setting a series of variables here, you make their values available for use throughout the performance test. Variables can be referenced anywhere within their scope as ${variable name}.

In addition to the User Defined Variables component, variables can also be defined in other components, such as Test Plans and HTTP Requests:

(Image: Chongyuan Yin, CC BY-SA 4.0)

For example, a defined variable is referenced in an HTTP Request:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Viewing the execution results, you can see that the value of the variable has been obtained:

(Image: Chongyuan Yin, CC BY-SA 4.0)

CSV Data Set Config

During a performance test, you may need parameterized input, such as the username and password for a login operation. When concurrency is relatively high, generating this data at runtime places a heavy burden on the CPU and memory. The CSV Data Set Config can be used as the source of the parameters required in this scenario (a sample file follows the parameter list below).

(Image: Chongyuan Yin, CC BY-SA 4.0)

The descriptions of some parameters in the CSV Data Set Config:

  • Variable name: Defines the parameter name in the CSV file, which the script can reference as ${variable name}.
  • Recycle on EOF: If set to True, this allows looping again from the beginning when reaching the end of the CSV file.
  • Stop thread on EOF: If set to True, this stops running after reading the last record in the CSV file.
  • Sharing mode: Sets the mode shared between threads and thread groups.
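As an illustration, a hypothetical users.csv consumed by this config might look like the following (the file name and values are made up):

username,password
alice,Secr3t!
bob,Passw0rd

With the variable names defined as username and password, samplers within the config's scope can then reference ${username} and ${password} in their parameters or message body.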
Assertions

An Assertion checks whether the response to a request matches expectations. Assertions are an important part of automated test scripts, so you should pay close attention to them.

Commonly used JMeter assertions include the Response Assertion, JSON Assertion, Size Assertion, Duration Assertion, and Beanshell Assertion. Below, I introduce the frequently used JSON Assertion.

JSON Assertion

This is used to assert the content of the response in JSON format. A JSON Assertion is added on an HTTP Sampler in this example, as shown in the following image:

(Image: Chongyuan Yin, CC BY-SA 4.0)

The root of the JSON path is always called $, and it can be expressed in two different styles: dot-notation (.) or bracket-notation ([]). For example, $.message[0].name or $['message'][0]['name'].

Here's an example of a request made to https://www.google.com/doodles/json/2022/11. The $[0].name value represents the 'name' part in the first array element in the response.
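For context, here is an abridged sketch of the kind of JSON array that request returns. Only the name field of the first element is shown; the other fields and entries in the real response are omitted:

[
  { "name": "2022-world-cup-opening-day" },
  { "name": "..." }
]

With this shape, the JSON path $[0].name selects "2022-world-cup-opening-day".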

(Image: Chongyuan Yin, CC BY-SA 4.0)

The Additionally assert value option specifies that the value of name should be verified, and the Expected value is set to '2022-world-cup-opening-day'.

Run the script and look at the results. You can see that the assertion has passed.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Here are the possible conditions and how they're treated:

  • If a response result is not in JSON format, it's treated as a failure.
  • If the JSON path cannot find the element, it fails.
  • If the JSON path finds the element, but no conditions are set, it passes.
  • If the JSON path finds an element that does not meet the conditions, it fails.
  • If the JSON path finds the element that meets the conditions, it passes.
  • If the JSON path returns an array, it iterates to determine whether any elements meet the conditions. If yes, it passes. If not, it fails.

Go back to JSON Assertion and check the Invert assertion.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Run the script, check the results, and you can see that the assertion failed:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Timers

The pause time between requests in the performance test is called "thinking time." In the real world, the pause time can be spent on content search or reading, and the Timer simulates this pause.

All timers in the same scope are executed before the samplers.

If you want the timer to be applied to only one of the samplers, add the timer to the child node of the sampler.

JMeter timers mainly include Constant Timer, Uniform Random Timer, Precise Throughput Timer, Constant Throughput Timer, Gaussian Random Timer, JSR223 Timer, Poisson Random Timer, Synchronizing Timer, and BeanShell Timer.

Constant Timer

A Constant Timer means that the interval between each request is a fixed value.

(Image: Chongyuan Yin, CC BY-SA 4.0)

After configuring the thread delay first to 100 milliseconds and then to 1000 milliseconds, run the script:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Check the data in the table, where #1 and #2 are the running results when the configuration is 100 milliseconds, and #4 and #5 are the running results when the configuration is 1000 milliseconds. You can see that the interval between #4 and #5 is significantly greater than that between #1 and #2:

(Image: Chongyuan Yin, CC BY-SA 4.0)

Constant Throughput Timer

The Constant Throughput Timer controls the execution of requests according to the specified throughput.

(Image: Chongyuan Yin, CC BY-SA 4.0)

Configure the target throughput as 120 (note that the unit is samples per minute), and then, for Calculate Throughput based on, select All active threads in current thread group (shared):

(Image: Chongyuan Yin, CC BY-SA 4.0)

Run the script, check the results, and observe that the throughput is approximately 2/second (120/60).

(Image: Chongyuan Yin, CC BY-SA 4.0)

Pre-processors and post-processors

A pre-processor performs some operations before the sampler request. It's often used to modify parameters, set environment variables, or update variables.

Similarly, a post-processor performs some operations after the sampler request. Sometimes the response data needs to be used in subsequent requests, and you must process it first. For example, if a JWT token returned in a response must be extracted and used for authentication in subsequent requests, a post-processor handles that extraction.

Using JMeter

That covers the main test components of JMeter, so you can now feel confident starting your own tests. In another article, I will explain using the MQTT plugin in JMeter.


Best HTML & CSS Code Editors for Linux

Tecmint - Wed, 11/30/2022 - 12:32

Brief: In this tutorial, we look at the 8 best HTML and CSS Code editors for Linux developers. HTML & CSS editors enable developers to develop web applications faster and more efficiently. They provide

The post Best HTML & CSS Code Editors for Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Customer success stories: How Red Hat OpenShift solved challenges for organizations in IT, entertainment and the public sector

Red Hat News - Wed, 11/30/2022 - 08:00
In this month's customer success highlights, learn how Colombia's Superintendence of Industry and Commerce, Westech and Kaizen Gaming used Red Hat OpenShift (https://www.redhat.com/en/technologies/cloud-computing/openshift)...

SDL Tries Again To Prefer Wayland Over X11

Phoronix - Wed, 11/30/2022 - 06:20
At the start of the year, SDL attempted to prefer Wayland over X.Org/X11, thanks to the maturing Wayland support for this widely-used software/hardware abstraction layer relied upon by numerous cross-platform games. But that change was later reverted over ecosystem challenges around Wayland. Now, as we approach the end of the year, SDL is again trying to prefer Wayland over X11...
