What are duplicates and how do we deal with them?

Duplicates are records in the GVL databases that refer to one and the same production but carry slightly different names due to typing or transmission errors, and therefore appear in the database more than once. In short: duplicates are unwanted twins of a record.

GVL is working flat out to detect these duplicates and merge them into a single record. Our performers benefit from this data cleansing: the clearer and more unambiguous our data pool is, the easier and faster it is for you to register your contributions to productions.
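GVL does not publish the matching method it uses, but the basic idea of detecting near-duplicate names can be illustrated with a small sketch. The example below is purely hypothetical: it compares production titles pairwise with Python's standard-library `difflib.SequenceMatcher` and flags pairs whose similarity exceeds a chosen threshold, which is how typing-error twins like "Moonlight Sonata" and "Moonlite Sonata" would surface.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Case-insensitive similarity ratio between two titles (0.0 to 1.0).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicate_candidates(titles, threshold=0.85):
    # Compare every pair of titles; pairs above the threshold are
    # likely duplicates caused by typing or transmission errors.
    # The threshold value is an illustrative choice, not GVL's.
    candidates = []
    for i in range(len(titles)):
        for j in range(i + 1, len(titles)):
            if similarity(titles[i], titles[j]) >= threshold:
                candidates.append((titles[i], titles[j]))
    return candidates

catalog = ["Moonlight Sonata", "Moonlite Sonata", "Bolero"]
print(find_duplicate_candidates(catalog))
```

In practice, flagged pairs would still be reviewed before the records are merged, since two genuinely different productions can also have similar titles.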

Please note: removing duplicates may reduce the total number of your contribution notifications, but we make sure that all notifications are transferred to the remaining record, so your remuneration is not reduced. As a result, you also get a clearer overview of your productions.