This article appeared in the April 2022 issue of Resource Recycling. Subscribe today for access to all print content.

Recycling research is vital to making data-informed packaging and sustainability decisions. Different approaches to methodology and measurement in recycling research have, however, led to some disagreement within the sustainability field.

For example, there has been considerable discussion about whether weight is the best way to measure a recycling initiative. Since glass and fiber are very heavy, using weight as the primary metric can skew the data when comparing collection of those items to lighter materials. Judging success solely in terms of weight collected can also fail to take into account the more holistic environmental benefits of source reduction.

On the flip side, it has also been proposed that volume of recyclables be the preferred measurement. However, volume can also skew reports, likely in favor of less compactable materials. Likewise, life cycle assessments (LCAs), both in recycling specifically and in sustainable packaging more generally, have been controversial. Opponents critique how the boundaries of an LCA’s scope can drastically change the results, while supporters believe that, despite the method’s shortcomings, it is still the best available way to compare the environmental impacts of different products and processes.

In July 2021, the Sustainable Packaging Coalition published the 2020/2021 edition of the “Centralized Study on the Availability of Recycling,” an update to the 2015/2016 version. That research examines the availability of recycling programs, and developing the data – along with reading other recent studies – led us to reflect on how important methodology is when conducting research in an area as complex and substantial as the quantity and quality of recycling programs nationwide.

To help move the industry conversation forward, we’ve developed a list of key concepts to keep front-of-mind when evaluating existing recycling research and conducting sector studies.

Research evaluation

We’ve split our research best practices into two areas – evaluating existing research and the steps for conducting new research.

On the evaluation end of things, we recommend four key considerations.

Evaluate the match between the researcher(s) and the research questions.

Because recycling is an interdisciplinary field, many kinds of subject matter expertise can prepare a researcher to conduct recycling studies. However, the interdisciplinary nature of recycling also means that different studies can require different expertise. A researcher’s toolkit of assessment methods is informed by their background knowledge and training, and each research question should be addressed with appropriate tools.

A well-researched report or article will come from a team of researchers with the appropriate skills for the question that is being asked. For example, suppose a team wanted to use data to design a labeling system to support consumers’ ability to sort waste into a mechanical recycling stream versus a chemical recycling stream.

There are two components requiring expertise: the first is consumer research and the second is recycling technology. Let’s say Researcher 1 is a subject matter expert on the processing required for collected materials to become post-consumer recycled content, and Researcher 2 has expertise in methods of understanding consumer behavior with product-package systems. Both would bring relevant knowledge and methodological tools to the question, but neither researcher alone is equipped to handle both components of the project.

In this example, the varied expertise of the researchers enables validity in how each component of the research question is addressed, either through collaboration or a series of studies that separately address both parts.

Watch for rhetorical manipulation.

Subtle choices in the presentation of a report can be used to dramatize findings beyond the scope of the study. Simple word choice, for example, can have a big impact: using the term “dumping recycling” rather than “trading recycling” shifts the framing from neutral to negative.

This is important because the definition of waste is not universal; what one might classify as waste, another might see as a valuable resource. Employing a universal definition for a subjective concept privileges one’s perspective over another’s.

Citation strategies can also be manipulated to artificially inflate apparent credibility. For example, a study might cite the same source multiple times to make it appear as if there is more consensus or substantiation than actually exists. Visual representations like tables and figures can also unduly influence reader perceptions: units of measurement or significant figures might be manipulated, a baseline could be omitted, or data sets may be improperly combined. A study’s limitations should be acknowledged; omitting appropriate qualifications subtly overstates credibility.

Examine the distance between the data and the claim.

Primary sources and original data enable different insights than secondary sources that describe existing knowledge or data. While both can be useful in developing understanding, the distance between the original data and the claim being made should be as small as possible.

Citing newspaper articles about a change in recycling policy is less credible than citing the original policy. However, if one is claiming a trend in how recycling policy changes are being framed by the media, citing many newspaper articles would be credible support for the claim.

Also, watch for interpretive leaps beyond the scope of the data, and for discussion framed as a definitive result or an absolute. Each claim backed by data or evidence should have an appropriate warrant: it should be clear how the cited sources relate to the claim, and the assumption linking the evidence to the claim should be readily understood by the reader.

Look at the comparisons: Are they really apples to apples?

A reader of a study can only judge whether a difference is significant if there is a way to assess how close the compared values are.

If two averages are being compared, make sure there is both a point estimate (the average) and a measure of spread (a standard error or confidence interval). If percentages are being compared, look at the scale in terms of original units and prevalence. A 100% increase could be a change from 1 to 2, or a change from 500 to 1000. Smaller differences are not inherently less important, as the scale of the change matters as well. What matters is that the reader has enough information to assess the change in context.
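
As a minimal sketch, using hypothetical collection figures of our own invention, the Python snippet below shows what it means to report a point estimate alongside a measure of spread (here, a normal-approximation confidence interval):

```python
import math
import statistics

def mean_with_ci(samples, z=1.96):
    """Return the sample mean and a ~95% confidence interval
    half-width, using a normal approximation."""
    mean = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(len(samples))  # standard error
    return mean, z * se

# Hypothetical curbside collection figures (lbs/household/week)
# for two program designs; illustrative numbers only.
program_a = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9]
program_b = [4.4, 4.9, 4.0, 4.6, 4.3, 4.7]

mean_a, half_a = mean_with_ci(program_a)
mean_b, half_b = mean_with_ci(program_b)
print(f"Program A: {mean_a:.2f} +/- {half_a:.2f}")
print(f"Program B: {mean_b:.2f} +/- {half_b:.2f}")
# If the intervals overlap substantially, the difference between the
# averages may not be meaningful; the spread gives readers context
# that the point estimates alone cannot.
```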

Additionally, methodology can inform the appropriateness of a comparison. Watch for subjective manipulations of existing data, such as figures adjusted to reflect the expected outcome of a change in practice or policy rather than remeasured after the fact.

Likewise, ask yourself if the best form of data is being used to ensure a fair comparison. Oftentimes per-capita data makes sense for comparison, but sometimes aggregate totals are more appropriate, as the sketch below illustrates. Comprehensive reporting of the results being compared allows the findings to be assessed with the necessary context; inappropriate comparison can lead to overbroad interpretations that are either inaccurate or lacking in nuance.
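
Here is a small sketch, again with hypothetical numbers, showing how aggregate and per-capita views of the same data can point in opposite directions:

```python
# Hypothetical annual program totals; illustrative numbers only.
programs = {
    # name: (short tons collected per year, population served)
    "City A": (12_000, 150_000),
    "City B": (48_000, 900_000),
}

for name, (tons, population) in programs.items():
    per_capita_lbs = tons * 2000 / population  # short tons -> lbs/person
    print(f"{name}: {tons:,} tons total, {per_capita_lbs:.0f} lbs per capita")

# City B collects four times the tonnage in aggregate, yet City A
# recovers more material per resident. Which framing is fair
# depends on the research question being asked.
```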

Conducting research

The recommendations above are geared toward evaluation of existing studies. Below are our thoughts on most effectively implementing new recycling research.

Choose your research team wisely.

Recycling is an interdisciplinary field with a need for researchers with varied expertise. Partners in data collection and analysis, as well as reviewers, should all be qualified to assess the types of research questions being posed in the project.

Different types of research questions necessitate different methods; for instance, assessing consumer behavior requires a different set of methodologies than conducting a bale audit or examining the feasibility of a novel recycling process.

Our partnership on past projects with Resource Recycling Systems (RRS) is an example of partnering with methodology specialists. Their experience gathering data on population-based access to recycling, along with the development of transparent coding rubrics for qualitative data, enabled a thorough and accurate update to our report on the availability of U.S. recycling.

Avoid rhetorical manipulation in the presentation of results.

Researchers have a responsibility not to cherry-pick results. When making figures, avoid manipulating perception by truncating or selectively rescaling the X or Y axis. When discussing percentages, disclose any rounding that might make the total greater or less than 100%. Beyond the presentation of evidence, word choice also matters: choosing to say “dumping garbage” rather than “trading recyclables” implies that a Western perspective on what is or is not valuable is universal rather than subjective.
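
As a brief illustration, the sketch below (with hypothetical recovery rates of our own invention) plots the same three data points twice; only the y-axis limits differ:

```python
import matplotlib.pyplot as plt

# Hypothetical recovery rates (%); illustrative numbers only.
years = ["2019", "2020", "2021"]
rates = [31.5, 32.0, 32.4]

fig, (ax_full, ax_cropped) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(years, rates)
ax_full.set_ylim(0, 100)        # zero baseline: a modest change
ax_full.set_title("Zero baseline")

ax_cropped.bar(years, rates)
ax_cropped.set_ylim(31, 32.5)   # truncated axis: same data look dramatic
ax_cropped.set_title("Truncated axis")

for ax in (ax_full, ax_cropped):
    ax.set_ylabel("Recovery rate (%)")

plt.tight_layout()
plt.show()
```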

Because findings are presented by people, who carry their own biases and perspectives, the manner in which results or claims are presented is not inherently neutral. Nor should it always be neutral – after all, the results may indicate a need for change. However, the presentation of findings should always aim to be truthful.

Use reproducible methods.

Reproducible means another research team would be able to follow the same steps and arrive at the same conclusions. Reproducibility requires both accurate reporting of the study’s quantitative methodology and transparency in its qualitative coding schemes. While data can be proprietary, describing how it was sourced and including error terms are important for being transparent with readers and with other research teams interested in reproducing the findings or comparing the results to other studies.
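
As one small, hypothetical illustration of a reproducible step: if a study reviews a random sample of recycling programs, publishing the random seed and the sampling frame lets another team draw the identical sample. The program IDs below are placeholders, not real data.

```python
import random

SEED = 20220401        # report this value in the methodology section
SAMPLE_SIZE = 50

# Hypothetical sampling frame of program identifiers.
sampling_frame = [f"program-{i:04d}" for i in range(1, 1201)]

rng = random.Random(SEED)             # isolated, seeded generator
sample = rng.sample(sampling_frame, SAMPLE_SIZE)

print(sample[:5])  # the same five programs print on every run
```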

When setting out on a research project, reproducibility also includes searching for established methodology before designing your own approach. For example, a smart researcher will look for existing test standards on how to conduct a bale audit instead of developing an unvetted, unvalidated method.

One can only evaluate whether the evidence matches the claim if there is transparency in the origin of what is presented as evidence. For example, claims about the relative environmental benefit of one product over another should disclose whether that result comes from an LCA or from another manner of assessment.

Moving targets

Fully capturing data from a meaningful cross section of recycling programs in the United States is a challenge. Figuring out the best methodology to do so is even more difficult.

The recycling and packaging industries are dynamic, innovative and market-based, which means the status quo changes frequently. Research studies like the “2020/2021 Centralized Study on the Availability of Recycling” are a snapshot of the time and place at which the research is conducted. The dynamics of global, market-based industries fluctuate, and regular reassessment is needed to understand current trends and complexities.

Therefore, keeping these key concepts in mind when evaluating and conducting recycling research at any point in time is crucial to deepening our critical thinking about the industry as a whole.


Alyssa Harben, Ph.D., is a project manager at the Sustainable Packaging Coalition and can be contacted at [email protected].

Lucy Pierce is also a project manager at the Sustainable Packaging Coalition and can be contacted at [email protected].
