When cited papers are themselves cited, the newer papers often represent much of the scientific development of the subject of the Wikipedia article. To keep Wikipedia articles up to date, it is therefore relevant to know which papers cite these scholarly articles.
Scholia already provides assistance to editors. There is a Scholia profile for each paper; there is one for the paper I referred to earlier, and there is a Scholia profile for any subject identified in Wikidata. Just imagine what a Scholia tailored to a specific Wikipedia article could be and do.
Maintaining information is a drag, and there are many bots that append and amend the information we have based on what is in a Wikipedia article. For instance, the Internet Archive bot safeguards information on websites by making a copy for the Wayback Machine. For many of the references we know a DOI, and it is easy enough to ensure that the associated paper is known in Wikidata. It is easy enough to make a process out of this for a single paper.
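As a minimal sketch of that single-paper step, the query below looks up a paper in Wikidata by its DOI, using the real "DOI" property (P356) and the public Wikidata Query Service; the example DOI and the `User-Agent` string are illustrative assumptions, not from the text above.

```python
# Sketch: find the Wikidata item for a paper, given its DOI (property P356).
import json
import urllib.parse
import urllib.request

WDQS = "https://query.wikidata.org/sparql"

def doi_query(doi: str) -> str:
    # Wikidata stores DOI values in upper case, so normalise before matching.
    return f'SELECT ?paper WHERE {{ ?paper wdt:P356 "{doi.upper()}" . }}'

def find_paper(doi: str) -> list[str]:
    # Query the public endpoint; a descriptive User-Agent is required policy.
    url = WDQS + "?" + urllib.parse.urlencode(
        {"query": doi_query(doi), "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "doi-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [b["paper"]["value"] for b in data["results"]["bindings"]]

if __name__ == "__main__":
    # Hypothetical DOI, for illustration only.
    print(find_paper("10.1371/journal.pcbi.1002947"))
```

A bot would run this for every DOI found in an article's references and create the missing items.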
What it takes is a different view of processes: not the traditional serial process with serial results, but a focused process that supports a user story.
In the initial phase of a review of a Wikipedia article, the Wikipedian starts a process that updates all the citations to the article's existing references. Once the process is done, the citing papers are known, these papers have been fleshed out with open information, and Wikipedia knows the latest science on the subject under review.
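The "citations are known" step above can be sketched with Wikidata's real "cites work" property (P2860), queried in reverse to find the papers citing a given one; the item identifier passed in is a hypothetical placeholder.

```python
# Sketch: build a SPARQL query for all papers that cite a given paper,
# via Wikidata's "cites work" property (P2860).
def citing_query(qid: str) -> str:
    # ?citing -> any item whose "cites work" statement points at our paper.
    return (
        "SELECT ?citing ?citingLabel WHERE { "
        f"?citing wdt:P2860 wd:{qid} . "
        'SERVICE wikibase:label { bd:serviceParam wikibase:language "en". } '
        "}"
    )

if __name__ == "__main__":
    # Hypothetical QID for a paper under review.
    print(citing_query("Q12345"))
```

Run against the Wikidata Query Service (as in the DOI sketch), this gives a reviewer the newest literature on the article's sources in one pass.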
Much of this functionality already exists. Once the user story is supported, it is not only Wikipedians and scholars but also the general public who are invited to share in the sum of what we know.