
Q&A: How the TCF v2 Shared Libraries Can Help Integrators Encode and Decode Consent Strings and More

For the initial release of the Transparency and Consent Framework (TCF), the advertising industry worked tirelessly to publish the specification, along with shared libraries to help CMPs and vendors encode and decode consent strings in every major programming language. Even in the days before the deadline, the industry knew that offering these resources as shared libraries was the right approach. This time around, with TCF v2.0, we’ve had more time to prepare and think through the types of resources we, as an industry, will need, and we’ve worked together to build them.

Enter Chris Paterson of Conversant. Paterson helped create the v1 TCF shared libraries and, as part of the IAB Tech Lab’s GDPR Commit Group, saw an opportunity to make these libraries even better for v2.0. Here is a Q&A with Paterson on why he wanted to work on this project and how it will help CMPs and vendors as they work through their TCF v2.0 integrations.

Question: Why the need for TCF v2.0 shared libraries?

Paterson: Let’s face it, privacy tech is complex, and it’s a challenge to make sure all the signals are correct. Every player in the ecosystem relies on these signals being encoded and passed correctly. If ever there was a place for a standardized tool set, this is it. So it was natural that we come together with the IAB Tech Lab to create and manage an open source library that serves that shared need.

TCF v2.0 is an order of magnitude more complex than TCF v1.1. We found that shared libraries were immensely helpful for TCF v1.1, so we knew that for TCF v2.0 they would be even more so. I’m proud to say that I think the tools we have built for TCF v2.0 are better than what we built for TCF v1.1. Open sourcing allows us to share expertise and learnings. And because so many use the libraries, we reap the benefit of distributed QA – testing edge cases and exercising the specification details – ensuring that we all implement critical aspects correctly, together, and in an elegant, usable way. Together we are building a better, more reliable, and more refined product, which is critical for the adoption of the TCF generally – the more barriers we remove, the smoother the path to adoption.

Question: What functionality can I find in the TypeScript/JavaScript library?

Paterson: The library can be found on GitHub and in the public npm registry under the namespace @iabtcf. It was also used to create a website with a human-readable encoding and decoding tool that may be found at

The library is modular and broken down into the following packages: 

The most critical features are in the “Core” module

Core (@iabtcf/core)

  • Encodes/Decodes a TC string into a “TCModel” and vice versa
  • Creates a wrapper around the Global Vendor List to help with common CMP use cases, like sorting, filtering, and language translations.
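To give a feel for what the Core encoder/decoder is doing under the hood: a TC string is a set of dot-separated, base64url-encoded segments of bit-packed fields, as defined by the public TCF v2 string format. Below is a minimal, self-contained sketch of reading the 6-bit Version field from the core segment. It is written against the published format, not the library’s internals, and `readInt` is a hypothetical helper name, not part of @iabtcf/core.

```typescript
// Each base64url character encodes 6 bits. The core segment begins with a
// 6-bit Version field (bits 0-5), followed by a 36-bit Created timestamp
// in deciseconds since epoch, per the TCF v2 string format.
const B64URL =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

// Hypothetical helper: read `length` bits starting at bit `start` as an integer.
function readInt(segment: string, start: number, length: number): number {
  let value = 0;
  for (let i = start; i < start + length; i++) {
    const sextet = B64URL.indexOf(segment.charAt(Math.floor(i / 6)));
    const bit = (sextet >> (5 - (i % 6))) & 1;
    value = value * 2 + bit; // avoids 32-bit bitwise ops for wide fields
  }
  return value;
}

// The version field alone distinguishes v1.1 ("B...") from v2.0 ("C...") strings.
const coreSegment = "COtybn4COtybn4ABAB".split(".")[0]; // truncated sample header
console.log(readInt(coreSegment, 0, 6)); // 2 for a TCF v2.0 string
```

In practice you would never hand-roll this: the Core module’s TC string decoder handles all of the fields, including the variable-length vendor sections, and surfaces them on a model object.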

Additional modules include:

CmpApi (@iabtcf/cmpapi)
– Creates the on-page CMP API, __tcfapi()

Stub (@iabtcf/stub)
– Creates the on-page CMP API stub, __tcfapi()

Testing (@iabtcf/testing)
– Tools for testing CMPs that leverage the libraries

cli (@iabtcf/cli)
– Command-line utility for decoding TC strings
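Once a stub (and later the full CMP API) has installed __tcfapi() on the page, vendor scripts call it with a command name, the API version, and a callback, as defined by the TCF v2 CMP API. The sketch below is self-contained: the tiny inline stub stands in for a real CMP, so its command handling is purely illustrative and not the library’s actual implementation.

```typescript
// Callback shape defined by the TCF v2 CMP API:
// __tcfapi(command, version, callback, ...args)
type TcfCallback = (result: unknown, success: boolean) => void;

const page = globalThis as Record<string, any>;

// Stand-in CMP. A real page would install @iabtcf/stub as early as possible,
// and the CMP itself would later replace it.
page.__tcfapi = (command: string, version: number, callback: TcfCallback) => {
  if (command === "ping") {
    // Fields shown are illustrative; see the CMP API spec for the full PingReturn.
    callback({ gdprApplies: true, cmpStatus: "loaded", apiVersion: "2.0" }, true);
  } else {
    callback(null, false);
  }
};

// Vendor-side call:
let cmpStatus: string | undefined;
page.__tcfapi("ping", 2, (pingReturn: any, success: boolean) => {
  if (success) cmpStatus = pingReturn.cmpStatus;
});
console.log(cmpStatus); // "loaded" (this stub answers synchronously)
```

A real CMP may answer asynchronously, so consumers should rely only on the callback rather than on a return value.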

Question: How did we improve the libraries for TCF v2.0?

Paterson: The v1 libraries were built quickly on the eve of GDPR going into effect in 2018. We came out with just a bare-minimum encoding and decoding library in JavaScript, Java, and Swift called the ConsentStringSDK. Even though it was done quickly, it turned out to be a big success and was fairly widely adopted; a few non-official libraries even sprang up in the community, implementing the same design in other languages as well. With the success of the ConsentStringSDK and a year of working with it, we discovered opportunities to develop new features and identified more use cases we wanted to support. We began development before the TCF v2.0 specification was final, working in parallel with its finalization, which benefited both efforts.

Question: What other languages are available?

Paterson: A subgroup of the Tech Lab GDPR Working Group has been working on a Java version of the library. It’s focused more on high-performance decoding for high-volume server applications than on in-browser CMP tooling. We are currently building out the encoding portion and should have it available soon.

Question: What if I need a feature that’s not available, or find a bug?

Paterson: If you see something missing or find a potential bug, please file a GitHub issue. We comb through these issues daily. Please include as many details as possible so that we can investigate. These libraries are maintained by members of the Tech Lab working group.

Question: Why should others consider contributing to shared library/open source efforts in the future?

Paterson: The GDPR and other regulations impact all of us in digital marketing. The regulations are complex, and the software to support compliance is equally complex. Having a community working on this is not only fun but necessary: it helps ensure that we design for many use cases, test thoroughly together, and push forward industry adoption.


Chris Paterson
Senior/Lead Software Engineer