
Measuring up

Published: September 2, 2018

This article originally appeared in the August 2018 issue of Resource Recycling.

In February of this year, the U.S. EPA, the state of Tennessee, and the Southeast Recycling Development Council (SERDC) came together to host the Measurement Matters Summit in Chattanooga, Tenn. This event convened many representatives from local and state governments, national and international agencies, and trade organizations. These great minds worked on an age-old problem: how to measure our discarded materials.

The event spanned two-and-a-half days and focused on a wide variety of topics, including the State Measurement Program (SMP) online data-sharing tool and the pros and cons of U.S. EPA’s “Facts and Figures” report. Participants also dug deep into performance benchmarking, emerging tools in measurement, and goals and measures that do not fall into the traditional measuring of tonnage.

Though the event did not produce a single specific solution to solve measurement challenges, the discussions laid groundwork that can help stakeholders move forward. Perhaps most importantly, the gathering made clear the points of conflict among different states and programs that are holding up progress on establishing a clearer, more consistent recycling data set on a national scale.

This article will explain why measurement stagnation is occurring and will put forward a plan of action to create more consistency and clarity in the ways we count.

Identifying barriers

The issue at the heart of our inability to produce high-quality materials recovery data on a national scale was articulated succinctly at the summit by Will Sagar of the Southeast Recycling Development Council.

“Each state has its own system with its own terms, definitions and reporting requirements,” he said.

Clearly, if we don’t have a shared baseline for how we collect information and then present it, we have little hope of achieving wider consistency.

To address that consistency issue, we need to first discuss a couple of “straw man” factors that seem to continually pop up in measurement discussions. A straw man occurs when one side of a debate focuses on debunking an idea that is not actually the central component of the opposition’s argument – in essence, it’s the practice of over-amplifying a concern to the detriment of the wider discussion. In recycling measurement conversations, stakeholders often find themselves caught up on certain details that may be important but that do not need to completely stall progress.

The first of these is a determination among some states to eliminate double-counting problems. Clearly, we must develop systems in which tonnages are not tallied multiple times (thus skewing the data set and giving extra, unfair credit to certain stakeholders), but this issue should not be one that causes states to stop sharing data or to abandon work toward more national collaboration. First of all, it’s worth considering that safeguards already exist to limit the implications of double-counting. In many areas, local governments, states and EPA regions all vet the information they are reporting, meaning checks and balances are in place. Also, if there are instances where some material ends up getting tallied, say, in both Tennessee and Kentucky, we have to understand that this does not necessarily doom a national measurement picture.

When we look at U.S. waste generation, we’re talking about totals in the area of 250 million tons. Even if double-counting caused an extra million tons to end up in our final sum, the inaccuracy would be in the range of 0.4 percent. That’s something we need to be able to move past so we can focus on larger issues.
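
For reference, the arithmetic behind that rough 0.4 percent estimate is simply:

$$\frac{1{,}000{,}000\ \text{tons}}{250{,}000{,}000\ \text{tons}} = 0.004 = 0.4\%$$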

Another straw man concern is related to the numerical specifics on how states aggregate their data.

Conversations get hung up around the question of “how accurate is really accurate?” When we’re talking in terms of millions of tons, we probably don’t need to worry about how many decimal places we have to include to ensure sufficient precision or harmonization. When one state issues a number like 5 million tons and another comes out with 7.56 million tons and they can’t agree to collaborate on data reporting because of those decimal differences – well, that just becomes maddening.

At a national level, concerns about double-counting and decimal-point-style differences often seem to be the product of egos on the part of decision-makers, who use rationale such as “my state is further along and has better processes so we are not going to participate.” At the same time, other officials believe measurement just isn’t important enough for them at this time. Regardless of their reasoning, these stakeholders, by not submitting their numbers, create a smaller sample size, resulting in less accurate data both in their state and at the national level. “This results in gaps in information as it is compared across state lines,” Sagar stated. “The ensuing difficulty makes planning for program improvement difficult or impossible.”

Sitting back and listening to the discussions at the event in Chattanooga in February, I could see that state leaders take great pride in their measurement procedures. That pride showed in how they set goals and define materials. All well and good, but if a state isolates itself and its data, the industry as a whole suffers.

Three steps to move forward

To bring our numbers together, three important steps must occur. These recommended actions are straightforward, though they will require some shifts in procedure and will most definitely necessitate cooperation. But they can push us past barriers that have stymied policy wonks for over 35 years.

The first step might actually be the hardest to achieve. Everyone involved must simply drop the ego.

Finding measurement success is not a race that one state or company can win – the winner is either everyone or no one. If your state is advanced, bring your tools and statistics to the table in a constructive manner that establishes pathways to success. Be willing to take a step back (outside your borders) and share your data as well as your leadership and methodologies.

The next move is getting every state on board, with each participant presenting their data each year.

Regular reporting from all states provides year-over-year comparisons and benchmarking that stakeholders can count on. The good news is we’re really not far from having across-the-board participation. Already, we have 37 states and the District of Columbia participating in the SMP.

From there, we can move into step three: ensuring consistency within the data collection and reporting structure. The most impactful thing we can do in this realm is to focus on raw data. As an example, consider plastics. Let’s assume a local government has 1,000 tons of natural HDPE No. 2 plastic. The rawest level of data classification would be calling it just that: Natural – HDPE – No. 2 Plastic. If all reporting communities can categorize material that way, the state will have a very clear data point that can then be shared with other states to hopefully lead to an accurate national total.
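
To make the idea of “raw” classification concrete, here is a minimal sketch in Python of what a single reported data point could look like. The record structure and field names are illustrative assumptions, not taken from any state’s actual reporting system.

```python
from dataclasses import dataclass

@dataclass
class MaterialReport:
    """One reported tonnage line, kept at the rawest level of classification."""
    jurisdiction: str  # reporting local government (hypothetical name below)
    grade: str         # e.g., "Natural"
    resin: str         # e.g., "HDPE"
    resin_code: int    # e.g., 2
    tons: float

# The 1,000-ton example from the article, recorded at full specificity
# rather than being lumped into a broad category like "mixed plastics".
report = MaterialReport(
    jurisdiction="Example County",
    grade="Natural",
    resin="HDPE",
    resin_code=2,
    tons=1000.0,
)
print(report)
```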

The reality, of course, is that there is a wide range of specificity when it comes to the reporting of our material. Some states or communities might simply be using the term HDPE, or they may not get any more specific than “mixed plastics.” These differences don’t need to derail our efforts, however. Instead, we can start by grouping together those states and regions that use similar terminology. After all, the data from four states that report “mixed plastics” is going to be more useful than the data from just one. Over time, as collaboration and systems development continue, we can hope to move everyone toward the same terms and thus achieve robust national numbers based on consistent, raw numbers.

And technology, such as Green Halo or Emerge Knowledge’s ReTRAC Connect, can help us get there. As we better understand what different jurisdictions mean when they use different terms, we can leverage the increasingly sophisticated computing capabilities available to our industry to translate numbers into sets defined by common terms.
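
As a rough sketch of what that kind of translation could look like, the Python snippet below maps jurisdiction-specific labels onto common terms before summing tonnage. The crosswalk entries and the aggregation function are hypothetical illustrations; they do not describe how Green Halo or ReTRAC Connect actually work internally.

```python
from collections import defaultdict

# Hypothetical crosswalk from local labels to common terms.
# These mappings are illustrative assumptions, not an adopted standard.
TERM_CROSSWALK = {
    "natural hdpe #2": "Natural - HDPE - No. 2 Plastic",
    "hdpe natural": "Natural - HDPE - No. 2 Plastic",
    "hdpe": "HDPE (grade unspecified)",
    "mixed plastics": "Mixed plastics",
    "mixed plastic": "Mixed plastics",
}

def aggregate(reports):
    """Sum reported tons under common terms, flagging any label we can't map."""
    totals = defaultdict(float)
    for label, tons in reports:
        common = TERM_CROSSWALK.get(label.strip().lower(), f"UNMAPPED: {label}")
        totals[common] += tons
    return dict(totals)

# Three jurisdictions reporting at different levels of specificity
# still roll up into usable totals.
print(aggregate([
    ("Natural HDPE #2", 1000.0),
    ("HDPE", 750.0),
    ("Mixed Plastics", 2300.0),
]))
```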

Also, as we consider consistency in material classification, we’d be wise to follow the market’s lead on terminology. If a certain product is called “plastic clamshell” by materials recovery facilities, brokers and reclaimers, that’s the terminology state reporting systems should strive for as well. For one thing, a key goal for many state-level recycling coordinators is to use the materials recovery infrastructure to fuel regional market development. By actually using market terminology, we make it that much easier for businesses and other potential investors in a jurisdiction to understand the details of material flows in the area.

In addition, the market terms for recyclable commodities tend to extend across the planet, since buyers and sellers often operate in different countries or on different continents. Aligning the language of measurement initiatives with these established global business terms can help us better understand where we as states and communities fit into the worldwide picture, giving us better information to react to phenomena such as China’s National Sword.

In short, the commodities market for scrap materials has already named and valued the materials we’re all handling. Why do we think we have to reinvent the wheel on these terms?

Answers in accreditation

The three important shifts outlined above have been known to decision-makers for some time. So how can we move toward action now? A system of accreditation could be a big part of the answer.

One of the reasons states and regions have not been able to come together and drive data consistency is that nothing has acted as the glue bonding all of these different reporting agencies together.

The Measurement Matters gathering created several days of cohesion; now, we must strive for unity over the long term.

One way to make this happen would be the establishment of a third-party institution that would promote collaboration and continuous input between states.

The primary goal of the organization would be to collect the raw data and then to establish methods to aggregate data in an effective manner that avoids double-counting and data integrity issues. It would follow the principles of Six Sigma, a disciplined approach (used, incidentally, by many recycling businesses) that strives to eliminate defects in a given process, whether that be manufacturing, data tabulation or anything else.
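
As one minimal sketch of how such aggregation could avoid double-counting, assume each reported load carries a unique identifier that both the origin and the receiving jurisdiction include with their submissions. The identifier scheme and the function below are hypothetical illustrations, not part of any existing program.

```python
def national_total(state_reports):
    """Sum tonnage across states, counting each uniquely identified load once.

    state_reports: iterable of (state, load_id, tons) tuples.
    """
    seen = set()
    total = 0.0
    for state, load_id, tons in state_reports:
        if load_id in seen:
            continue  # this load was already counted by another reporting state
        seen.add(load_id)
        total += tons
    return total

# Tennessee and Kentucky both report the same 20-ton load; it counts once.
print(national_total([
    ("Tennessee", "LOAD-0042", 20.0),
    ("Kentucky", "LOAD-0042", 20.0),
    ("Kentucky", "LOAD-0099", 15.0),
]))  # -> 35.0, not 55.0
```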

Local governments, states, and EPA regions that adhere to the group’s standards will earn accreditation.

The end game is to establish and publicly identify those actors that conform to an adopted basic standard, similar to the way colleges and universities are accredited by the U.S. Department of Education. Because of the rigor of the accreditation process, participating stakeholders will be deemed credible partners in the national sharing of measurement data.

That sequence will in turn lead us to a national benchmark to define processes, materials, reporting and a standard method to aggregate the raw reported data. Those participating will agree to comply with these accredited standards and provide consistency across the country.

The results of such a streamlined and consistent framework of measurement could be hugely impactful for recycling. Currently, our understanding of waste diversion on a national scale comes from the EPA’s “Facts and Figures” report, which provides a “top down” statistical look. The data and reporting coming out of a group bringing states together through accredited number crunching would offer a vantage from the “bottom up.”

Together, these two vantage points will offer us a far more three-dimensional portrait of the state of the industry. Such an analysis will open the door to more informed decisions about market development, collection and processing opportunities, and much more.

No time to delay

The Measurement Matters Summit took the first step to breach current measurement barriers by surveying what data is available, identifying key players and beginning the push toward progress. This event should be repeated in a year or two, and it should continue until stakeholders have at least reached a collaborative agreement on a national measurement approach that all can adopt.

At the same time, work to develop measurement accreditation in a meaningful way cannot be delayed.

We all agree that measuring our programs is an essential step. By looking beyond differences in how we count and report, and by understanding the ways the industry as a whole will benefit from a robust and trustworthy pool of statistics, we can put a revamped framework in place relatively quickly.

We all have numbers. But without sharing, those numbers are never fully utilized. In the long term, that fact will hurt communities and companies of all sizes.

Larry Christley is program manager in the Tennessee Department of Environment & Conservation’s Division of Solid Waste Management, Materials Management Program. He can be contacted at [email protected].

The views in this article are the author’s alone and should not be attributed to the planners of the Measurement Matters Summit, the hosts or any sponsor.
